CubenSquare Tech

Devops L2 Q&A

SET – 1

1. What is DevOps, and why is it important?
DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the software development life cycle and provide continuous delivery with high software quality.

2. Can you explain the CI/CD pipeline and its components?
CI (Continuous Integration) is a practice where developers frequently merge code into a shared repository. CD (Continuous Deployment) automates the deployment of new changes. Key components include:
- Source Control: Git, SVN
- Build Automation: Jenkins, CircleCI
- Test Automation: Selenium, JUnit
- Deployment Automation: Ansible, Kubernetes

3. What is Infrastructure as Code (IaC), and why is it used in DevOps?
IaC refers to managing infrastructure through code, allowing teams to automate the provisioning and configuration of environments. Tools include Terraform, AWS CloudFormation, and Ansible.

4. What is the difference between Ansible, Puppet, and Chef?
All three are configuration management tools. Ansible uses an agentless architecture and is simpler to set up, Puppet uses a master-agent architecture, and Chef is built around Ruby and offers a powerful DSL for defining infrastructure.

5. How do you implement blue-green deployment?
Blue-green deployment minimizes downtime and reduces risk by running two identical production environments (blue and green). Traffic is routed to the green environment after validation, while blue remains as a backup.

6. How would you set up a monitoring and alerting system for production?
Use tools like Prometheus and Grafana for monitoring. Set up alerting rules based on thresholds (e.g., CPU usage, memory, response times) and integrate with services like PagerDuty or Slack for real-time alerts.

7. What is a Dockerfile? Can you walk through a basic Dockerfile?
A Dockerfile is a script that contains instructions to build a Docker image. A basic Dockerfile example:

FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]

8. How do you ensure security in DevOps?
Security can be implemented using the following:
- Static code analysis: tools like SonarQube
- Secret management: Vault, AWS Secrets Manager
- Compliance checks: tools like OpenSCAP or Chef InSpec

9. Can you explain Git branching strategies?
- Feature Branching: separate branches for each feature
- Gitflow: structured flow with master, develop, and feature branches
- Trunk-Based Development: minimal branches, merging frequently into trunk

10. How do you handle configuration management in a microservices architecture?
Centralized configuration management tools like Spring Cloud Config or Consul can be used to manage configuration files for all services in one place.

SET – 2

1. What is container orchestration, and why is Kubernetes popular?
Container orchestration automates the deployment, scaling, and management of containerized applications. Kubernetes is popular due to powerful features like automated scaling, self-healing, and service discovery.

2. What are namespaces in Kubernetes, and why are they used?
Namespaces provide a way to segment a Kubernetes cluster into virtual clusters. They help in organizing and isolating resources between teams or environments.

3. How do you optimize a CI/CD pipeline for faster deployments?
- Parallelizing tasks
- Caching dependencies
- Using lightweight containers
- Limiting unnecessary test runs

4. What's the difference between containers and virtual machines (VMs)?
Containers share the host OS and are more lightweight, while VMs run their own OS and are more resource-intensive.

5. What is a reverse proxy, and why is it used in a DevOps setup?
A reverse proxy forwards client requests to backend servers, improving security, performance, and load balancing. Nginx and HAProxy are popular reverse proxy servers.

6. What is Helm in Kubernetes?
Helm is a package manager for Kubernetes that allows you to define, install, and upgrade even the most complex Kubernetes applications.

7. What is the use of a service mesh in microservices?
A service mesh manages communication between microservices. Istio and Linkerd are popular tools that provide observability, traffic management, and security features.

8. What is the difference between Continuous Delivery and Continuous Deployment?
Continuous Delivery ensures code is always in a deployable state, while Continuous Deployment automates the release process all the way to production.

9. What are some common challenges with microservices?
- Complex inter-service communication
- Distributed data management
- Monitoring and logging across services

10. How do you handle secrets in a CI/CD pipeline?
Use secret management tools like HashiCorp Vault or AWS Secrets Manager, or environment variables encrypted with tools like the Jenkins Credentials Plugin.

SET – 3

1. What is canary deployment, and when would you use it?
Canary deployment releases a new version of an application to a small subset of users. It's useful when testing a new feature or mitigating risk during production deployments.

2. Explain the concept of "shift left" in DevOps.
"Shift left" means moving testing, security, and performance evaluation earlier in the software development lifecycle to identify issues sooner.

3. What's the difference between stateful and stateless applications?
Stateless applications do not retain any data between requests, while stateful applications store data across multiple sessions or requests.

4. How do you implement High Availability (HA) in your infrastructure?
Use techniques like load balancing, auto-scaling, database replication, and multi-region deployments to ensure high availability.

5. What deployment strategy would you use for zero downtime?
Blue-green deployment or rolling updates with Kubernetes ensure zero downtime during deployments.

6. What are Kubernetes pods, and how do they differ from containers?
A pod is the smallest deployable unit in Kubernetes; it can contain one or more containers that share storage and network resources.

7. Explain how you would secure a Kubernetes cluster.
- Use Role-Based Access Control (RBAC)
- Enable mutual TLS for service communication
- Use network policies to control traffic between pods

8. What are Jenkins pipelines?
Jenkins pipelines define a series of steps to automate the CI/CD process using code (Pipeline as Code). They support complex workflows and parallel task execution.

9. How do you handle rollbacks in case of a failed deployment?
Tools like Kubernetes and Helm have built-in rollback features. Additionally, using feature flags or storing previous versions of
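The blue-green and rollback answers above can be sketched as a small state machine. This is a minimal, illustrative Python sketch (the `BlueGreenRouter` class is hypothetical, not a real tool): two identical environments exist, the router sends live traffic to exactly one, and a failed health check on the new release simply leaves traffic on the proven environment.

```python
# Minimal blue-green deployment sketch (hypothetical router, illustration only).
# One environment serves live traffic; the other is staged with the new release.
class BlueGreenRouter:
    def __init__(self):
        self.live = "blue"    # environment currently serving traffic
        self.idle = "green"   # environment staged with the new release

    def deploy(self, healthy: bool) -> str:
        """Deploy to the idle environment; switch traffic only if it is healthy."""
        if healthy:
            # Swap roles: the validated new environment goes live, and the
            # old one is kept untouched as an instant-rollback target.
            self.live, self.idle = self.idle, self.live
        return self.live

router = BlueGreenRouter()
print(router.deploy(healthy=True))   # green goes live, blue kept as backup
print(router.deploy(healthy=False))  # health check fails: traffic stays on green
```

The key property, mirrored from the answers above, is that a rollback is just routing traffic back to the untouched old environment, not redeploying anything.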


Devops L3 Q&A

SET – 1

1. How would you design a scalable and resilient CI/CD pipeline for a multi-region microservices architecture?
- Use distributed build agents in each region to reduce latency.
- Use global load balancers to distribute traffic across services.
- Implement multi-region artifact repositories (e.g., Nexus, Artifactory).
- Automate deployments using GitOps with multi-region clusters.
- Add canary deployments and auto-scaling to ensure zero downtime.

2. How do you handle infrastructure drift in a cloud environment, and what tools would you use?
Infrastructure drift occurs when manual changes are made outside of IaC tools, causing discrepancies. Use tools like Terraform or Pulumi to manage drift by detecting changes in state and applying corrective actions. Implement policy as code with tools like Open Policy Agent (OPA) to ensure compliance with defined infrastructure standards.

3. Can you walk through the design of a High-Availability (HA) Kubernetes cluster across multiple regions?
- Use multi-master clusters with etcd distributed across regions.
- Set up cross-region load balancers (e.g., AWS Global Accelerator).
- Utilize Persistent Volume Claims (PVCs) and object storage (e.g., S3) for distributed data storage.
- Implement horizontal scaling with auto-scaling policies and node affinity for region-specific pods.

4. How do you handle Disaster Recovery (DR) in a microservices environment?
- Use multi-region deployments with data replication (e.g., RDS read replicas).
- Maintain backups and point-in-time restores for databases.
- Implement a runbook for failover strategies.
- Use chaos engineering tools like Gremlin or Chaos Monkey to simulate failures and test DR capabilities.

5. How would you implement security at various stages of a DevOps pipeline?
- Pre-commit: use static code analysis tools like SonarQube.
- Build: scan dependencies for vulnerabilities using Snyk or OWASP Dependency-Check.
- Pre-deploy: scan container images using Aqua, Twistlock, or Clair.
- Post-deploy: monitor for security anomalies using Falco or AWS GuardDuty.

6. What strategies would you use to handle scaling in a hybrid cloud environment?
- Implement autoscaling policies for both on-prem and cloud workloads using a mix of the Kubernetes Cluster Autoscaler and cloud-native auto-scaling (AWS, Azure, GCP).
- Use service mesh tools like Istio to manage network traffic and routing between on-prem and cloud environments.
- Implement cost-based scaling to optimize resource allocation based on cloud provider pricing models.

7. What's your approach to ensuring zero downtime during major infrastructure changes?
- Use blue-green or canary deployments to safely roll out changes.
- Leverage feature toggles to switch between new and old infrastructure.
- Use tools like Kubernetes rolling updates and ensure proper health checks for services.

8. How would you ensure observability in a complex system with multiple microservices?
- Implement distributed tracing using tools like Jaeger or OpenTelemetry to track requests across services.
- Set up centralized logging with the ELK stack or Fluentd.
- Implement metrics monitoring with Prometheus and visualize it using Grafana dashboards.
- Use correlation IDs to track a single request across multiple services for easier debugging.

9. Explain how you would secure container images and the registry.
- Use tools like Clair or Trivy to scan container images for vulnerabilities.
- Sign images with Docker Content Trust or Notary.
- Implement role-based access control (RBAC) in the registry to limit who can push and pull images.
- Enforce TLS for registry communication and use private registries like Harbor for secure storage.

10. What is your approach to managing secrets in a distributed environment?
- Use secret management tools like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault.
- Ensure secrets are not hardcoded and are injected into applications at runtime via environment variables or mounted files.
- Rotate secrets regularly and apply auditing to ensure no unauthorized access.

SET – 2

1. What are some strategies for optimizing cost in cloud-based DevOps pipelines?
- Use spot instances or reserved instances for non-production workloads.
- Right-size VMs and containers based on usage patterns.
- Implement auto-scaling to match capacity with demand.
- Use tools like AWS Cost Explorer or the Google Cloud Pricing Calculator to monitor and optimize cloud spend.

2. What are the key differences between event-driven architecture and traditional request-response architecture in a microservices setup?
- Event-driven architecture: services communicate via asynchronous events, allowing decoupled and highly scalable systems. Examples include Kafka and RabbitMQ.
- Request-response architecture: services communicate directly and synchronously, which can lead to tight coupling and higher latency but is easier to debug.

3. How do you handle scaling of stateful applications in Kubernetes?
- Use StatefulSets for stateful applications that require unique network IDs and persistent storage.
- Implement volume replication and multi-zone Persistent Volumes.
- Utilize Kubernetes storage classes with cloud-provider-backed storage (e.g., AWS EBS, GCP Persistent Disks).

4. How would you implement a GitOps workflow for infrastructure management?
- Use Git as the single source of truth for both application code and infrastructure code (IaC).
- Implement tools like Argo CD or Flux to automatically deploy changes from the Git repository to the Kubernetes cluster.
- Ensure changes are reviewed and approved via pull requests before they are merged and deployed.

5. How would you design a multi-tenant Kubernetes environment?
- Use namespaces to isolate workloads for different tenants.
- Implement network policies to restrict communication between tenant namespaces.
- Use RBAC to ensure only authorized users can manage resources within their own namespaces.
- Set up resource quotas to limit the amount of CPU, memory, and storage available to each tenant.

6. What strategies would you use to monitor and debug networking issues in a Kubernetes cluster?
- Use Kubernetes network policies to enforce rules on pod communication and isolate network traffic.
- Implement CNI plugins like Calico or Weave for managing pod network traffic.
- Debug using tcpdump, kubectl exec to ping pods, and network visualization tools like Kiali for tracing service mesh traffic.

7. How do you ensure observability in a serverless architecture?
- Implement distributed tracing with AWS X-Ray or Google Cloud Trace for serverless functions.
- Use centralized logging systems like CloudWatch or Stackdriver Logging.
- Monitor function performance and trigger rates with metrics using Prometheus, Datadog, or cloud-native monitoring services.

8. What's your approach to handling multi-cloud DevOps environments?
Use tools like Terraform or Pulumi
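The GitOps workflow described above reduces to a reconciliation loop: Git holds the desired state, a controller diffs it against live cluster state, and the difference becomes create/update/delete actions. The sketch below is a toy illustration of that loop, assuming plain dictionaries as stand-ins for manifests (it is not how Argo CD or Flux are actually implemented).

```python
# GitOps reconciliation sketch: Git is the source of truth; a controller
# continually diffs desired state (from Git) against live cluster state and
# emits the actions needed to converge. Dictionaries stand in for manifests.
def reconcile(desired: dict, live: dict) -> list:
    """Return (action, name) pairs that converge live state toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))
        elif live[name] != spec:
            actions.append(("update", name))
    for name in live:
        if name not in desired:  # prune drift introduced outside Git
            actions.append(("delete", name))
    return actions

desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
live = {"web": {"replicas": 1}, "debug-pod": {"replicas": 1}}
print(reconcile(desired, live))
# [('update', 'web'), ('create', 'api'), ('delete', 'debug-pod')]
```

Note how the loop also answers the drift question from SET-1: any manual change outside Git shows up as a diff and is corrected on the next reconciliation.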


Openshift Q&A

SET – 1

1. What is OpenShift?
OpenShift is an open-source container application platform based on Kubernetes. It helps developers develop, deploy, and manage containerized applications.

2. What are the key components of OpenShift?
- Master: manages nodes and orchestrates the deployment of containers.
- Nodes: run containers and handle workloads.
- etcd: stores cluster configuration data.
- OpenShift API: handles API calls.

3. How does OpenShift differ from Kubernetes?
OpenShift extends Kubernetes by adding features such as a web console, a built-in CI/CD pipeline, multi-tenant security, and developer tools. It also has stricter security policies.

4. What is Source-to-Image (S2I) in OpenShift?
S2I is a process that builds Docker images directly from application source code, making it easier to deploy apps without writing a Dockerfile. It automatically builds a container from source code and deploys it in OpenShift.

5. Explain the difference between DeploymentConfig and Deployment in OpenShift.
DeploymentConfig is specific to OpenShift and offers additional control over deployment strategies, hooks, and triggers, whereas Deployment is a Kubernetes-native resource for deploying containerized apps.

6. How does OpenShift manage storage and persistent volumes?
OpenShift uses Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to provide dynamic and static storage for containerized applications. It supports different storage backends like NFS, AWS EBS, and GlusterFS.

7. How do you handle multi-tenancy and security in OpenShift?
OpenShift uses Role-Based Access Control (RBAC), Security Context Constraints (SCCs), and network policies to handle multi-tenancy. SCCs define the security rules for pods, and RBAC defines access control based on user roles.

8. Explain how you would implement CI/CD pipelines in OpenShift.
OpenShift has native Jenkins integration for automating CI/CD pipelines. It can be set up using OpenShift's BuildConfigs and Jenkins pipelines to automate testing, building, and deploying applications.

9. What is the OpenShift Operator Framework, and why is it important?
The Operator Framework in OpenShift automates the deployment, scaling, and lifecycle management of Kubernetes applications. It allows applications to be managed in the same way Kubernetes manages its own components.

10. How would you design a highly available OpenShift cluster across multiple regions?
Use a multi-region architecture with disaster recovery features. Utilize load balancers (like F5 or HAProxy), configure etcd clusters for consistency, and use persistent storage replicated across regions. Also, use Cluster Federation for managing multiple clusters.

SET – 2

1. What is an OpenShift project, and how is it used?
An OpenShift project is a logical grouping of resources, such as applications, builds, and deployments. It provides a way to organize and manage resources within a cluster.

2. How do you secure an OpenShift cluster?
- Implement RBAC to limit access.
- Use network policies to control traffic between pods.
- Enable SELinux and Security Context Constraints to enforce pod-level security.
- Encrypt sensitive data in etcd and use TLS to secure communication.

3. How would you perform an OpenShift cluster upgrade?
Plan upgrades by checking the OpenShift compatibility matrix, backing up etcd, and testing the upgrade in a staging environment. Perform upgrades using the OpenShift command-line interface (CLI) and ensure high availability by performing a rolling upgrade.

4. Explain the concept of a pod in OpenShift.
A pod is the smallest unit of deployment in OpenShift. It represents a group of containers that share a network namespace and are scheduled together.

5. What is a route in OpenShift, and how does it differ from a service?
A route defines how external traffic is routed to services within a cluster; it acts as a virtual host for your applications. A service, by contrast, is a logical group of pods that provide the same functionality.

6. Explain the concept of a deployment configuration in OpenShift.
A deployment configuration defines the desired state of an application, including the number of replicas, image, and resource requirements. It also handles rolling updates and scaling.

7. What is the role of a build configuration in OpenShift?
A build configuration defines the process for building container images. It can be triggered by source code changes or scheduled events.

8. What is the difference between a stateful application and a stateless application in OpenShift?
A stateful application stores data that persists across restarts or failures; examples include databases and message queues. A stateless application doesn't require persistent data and can be easily scaled horizontally.

9. How do you manage persistent storage in OpenShift?
OpenShift provides options like Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to manage persistent storage for stateful applications.

10. What is a route in the OpenShift Container Platform?
You can use a route to host your application at a public URL (Uniform Resource Locator). Depending on the application's network security setup, it can be secure or insecure. An HTTP (Hypertext Transfer Protocol)-based route is an unsecured route that provides a service on an unsecured application port and employs the fundamental HTTP routing protocol.

SET – 3

1. What are Red Hat OpenShift Pipelines?
Red Hat OpenShift Pipelines is a cloud-native continuous integration and delivery (CI/CD) system based on Kubernetes. It uses Tekton building blocks to automate deployments across several platforms, abstracting away the underlying implementation details.

2. Explain how Red Hat OpenShift Pipelines uses triggers.
Triggers and Pipelines together form a full-featured CI/CD system in which Kubernetes resources define the entire CI/CD process. Triggers capture and process external events, such as a Git pull request, and extract key pieces of information from them.

3. What can OpenShift Virtualization do for you?
OpenShift Virtualization, an add-on to the OpenShift Container Platform, allows you to run and manage virtual machine workloads alongside container workloads. It uses Kubernetes custom resources to introduce additional objects into your OpenShift Container Platform cluster and enable virtualization workloads.

4. What is the use of admission plug-ins?
Admission plug-ins can be used to control how the OpenShift Container Platform works. After a request is authenticated, admission plug-ins intercept resource requests submitted to the master API; they can validate resource requests and ensure that scaling policies are obeyed.

5. What are OpenShift cartridges?
OpenShift cartridges serve as hubs for application development. Along with a preconfigured environment, each cartridge has its own libraries,
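The admission plug-in answer above can be made concrete with a toy validating-admission check. This is an illustrative sketch only, not the real OpenShift plug-in API: after authentication, an admission check inspects a pod's resource request and rejects it if it would exceed the project's quota.

```python
# Sketch of validating-admission logic (illustration only; not the actual
# OpenShift admission plug-in interface). A request is allowed only if the
# project's CPU quota, in millicores, would not be exceeded.
def admit(request_cpu_m: int, used_cpu_m: int, quota_cpu_m: int) -> tuple:
    """Return (allowed, reason) for a pod CPU request, in millicores."""
    if request_cpu_m <= 0:
        return (False, "request must be positive")
    if used_cpu_m + request_cpu_m > quota_cpu_m:
        return (False, "quota exceeded")
    return (True, "ok")

print(admit(request_cpu_m=500, used_cpu_m=1200, quota_cpu_m=2000))  # (True, 'ok')
print(admit(request_cpu_m=900, used_cpu_m=1200, quota_cpu_m=2000))  # (False, 'quota exceeded')
```

The point mirrored from the answer: admission runs after authentication but before the object is persisted, so an invalid request never reaches the cluster state.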


Installing VirtualBox on Windows

To install VirtualBox on Windows, first download the appropriate installation file for your host from https://www.virtualbox.org/wiki/Downloads and choose the Windows host package.

1. Double-click the file to launch the VirtualBox Setup wizard, then click Next on the first screen. This tells the wizard that you want to install VirtualBox.
2. On the Custom Setup screen, you'll see a list of the features the wizard will install. In this example, leave the default selection. Browse and select the location where you want to install VirtualBox; the default location is fine, but feel free to change it if you prefer. Click Next when you're ready to continue.
3. On the next screen, you'll see a warning about networking: the setup process installs a virtual network adapter, which may cause your network connection to disconnect momentarily. Click Yes to continue.
4. Finally, you'll see a screen asking you to confirm the installation. Click Install to install VirtualBox on Windows. The installation takes several minutes, depending on your system speed.
5. Click Finish to close the wizard after the installation and start using VirtualBox.

After you install VirtualBox on Windows, you can create your first virtual machine. First, decide which OS you want to install and download its ISO image file. If you want to set up a Red Hat VM, first create a Red Hat personal account at https://developers.redhat.com/products/rhel/download: select Login, then Register, choose a personal account, fill in all the mandatory fields, and select Create account. Then log in at https://access.redhat.com/downloads/content/rhel with the account created above and download the ISO image from the products section.

Creating Your First Virtual Machine

1. Click the New button in the top-right corner of the VirtualBox window. This brings up the Create Virtual Machine wizard, which lets you configure your new VM with the settings you want.
2. Select the ISO image you downloaded, provide a name for your virtual machine, and click Next.
3. If required, change the username and password, then click Next. Since this is an unattended installation, you can continue with the default values; otherwise, change the values to suit your requirements.
4. Click Finish to set up the virtual machine.
5. Select the Red Hat VM, then click Settings, then Storage; click Controller: IDE, select the ISO, and click OK.
6. Click Start to boot into the virtual machine, then press Enter or wait for automatic boot.
7. Select the language. Under System, select Installation Destination.
8. Under User Settings, select Root Password, enter the required password, and click Done at the top.
9. Under User Settings, select Create User, enter the required username and password, and click Done at the top.
10. Click Begin Installation.
11. Log in with the user account you created, then click Activities and open a terminal. You will see the terminal with its prompt, where you will work with commands.
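The GUI steps above also have a command-line equivalent via VirtualBox's VBoxManage tool. The sketch below only builds the command strings rather than executing them, so it is safe to run anywhere; the VM name, memory size, and ISO path are hypothetical examples, not values from this tutorial.

```python
# Build (but do not execute) the VBoxManage command sequence that mirrors
# the GUI steps: create and register a VM, size it, attach the ISO, boot it.
# VM name, memory size, and ISO path below are hypothetical examples.
def vbox_commands(vm_name: str, iso_path: str, memory_mb: int = 2048) -> list:
    """Return the VBoxManage command sequence to create and boot a VM."""
    return [
        f'VBoxManage createvm --name "{vm_name}" --ostype RedHat_64 --register',
        f'VBoxManage modifyvm "{vm_name}" --memory {memory_mb} --cpus 2',
        f'VBoxManage storagectl "{vm_name}" --name IDE --add ide',
        f'VBoxManage storageattach "{vm_name}" --storagectl IDE '
        f'--port 0 --device 0 --type dvddrive --medium "{iso_path}"',
        f'VBoxManage startvm "{vm_name}"',
    ]

for cmd in vbox_commands("rhel9-lab", r"C:\isos\rhel-9.iso"):
    print(cmd)
```

To actually run these on a machine with VirtualBox installed, each string could be passed to a shell; the GUI wizard remains the easier path for a first VM.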


Journey Back to Private Datacenter from Cloud | Dropbox

Vanakkam all. In the current world, companies are rushing to move their applications from private datacenters (DCs) to cloud providers, who offer various services including compute, networking, storage, and security. The main reasons for switching from DC to cloud revolve around DC cost, efficiency, and scalability. But soon, will we witness companies migrating back from the cloud to private datacenters, considering unprecedented price hikes, unused services, unused resources, and confusion in service selection, along with server manufacturers now offering smaller hardware and AI-powered processors that occupy far less space than in earlier days?

Example | Dropbox
When we talk about moving back to a DC due to unplanned cloud service usage and its effect on cost, several companies have already moved back to their private DCs, or are planning to, as a challenge to show that they can build a cost-effective, efficient, well-planned DC on their own instead of spending a huge budget on the cloud.

Dropbox
In a well-publicized move, Dropbox decided to shift away from Amazon Web Services (AWS) to its own custom-built infrastructure. This decision was primarily motivated by the need to control costs and improve performance, as managing their massive amounts of data on AWS was becoming increasingly expensive. "It was clear to us from the beginning that we'd have to build everything from scratch," wrote Dropbox infrastructure VP Akhil Gupta on his company blog in 2016, "since there's nothing in the open source community that's proven to work reliably at our scale. Few companies in the world have the same requirements for scale of storage as we do." It's the reverse approach. Today, Dropbox runs its own advanced, AI-driven datacenters. Their strategy for building a datacenter is interesting: they have come up with their own checklist, stages, and planning for acquiring a site before a datacenter is officially set up.
Interesting checklist | DC site selection process:
Before Dropbox stages a DC, it goes through the following site selection process:
- Power
- Space
- Cooling
- Network
- Security
- Site hazards
- Operations & engineering
- Logistics
- Rental rate
- Utility rate
- Rental escalator
- Power usage effectiveness
- Supporting infrastructure design
- Expected cabinet weight, dimensions, and quantity
- Increased risk due to construction delays
- Inadequate monitoring programs, which would not have provided the necessary facility alerts

With all the above criteria assessed, the team comes up with a scorecard. Based on the score, they decide the site location and then work on the DC setup.

Large vs small DC space:
Technology advancement is moving toward smaller servers, smaller rack space, and facilities that make it easy to upgrade or enhance existing hardware. There are providers who can help with hardware upgrade lease agreements.

Consult our CubenSquare Experts for Migration:
Reach out to our experts for:
- Moving back to a private datacenter setup
- Comparing existing cloud pricing vs a DC setup and its pricing forecast
- Understanding your application, customer base, and thought process to provide a cloud/DC solution
- Cost optimization in your existing cloud

Summary:
Probably, in the next 5 years, we will see several companies moving back to private datacenters from the cloud, considering the temptation to use services they don't need, excessive resource usage, and lack of knowledge in choosing the right service, all resulting in enormous price hikes.
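The scorecard idea above can be illustrated with a small weighted-scoring sketch. The criteria names are taken from the checklist, but the weights, ratings, and site names are invented for illustration; Dropbox's actual weighting is not public.

```python
# Illustrative site-selection scorecard. Criteria come from the checklist
# above; the weights, ratings, and candidate sites are invented examples.
WEIGHTS = {"power": 0.25, "cooling": 0.15, "network": 0.20,
           "rental_rate": 0.25, "site_hazards": 0.15}   # weights sum to 1.0

def site_score(ratings: dict) -> float:
    """Weighted score for one site; each criterion is rated 0-10."""
    return round(sum(WEIGHTS[c] * ratings.get(c, 0) for c in WEIGHTS), 2)

sites = {
    "site_a": {"power": 9, "cooling": 7, "network": 8, "rental_rate": 5, "site_hazards": 8},
    "site_b": {"power": 6, "cooling": 8, "network": 7, "rental_rate": 9, "site_hazards": 7},
}
best = max(sites, key=lambda s: site_score(sites[s]))
print({s: site_score(r) for s, r in sites.items()}, "->", best)
# {'site_a': 7.35, 'site_b': 7.4} -> site_b
```

The design point is simply that a shared rubric turns a subjective site visit into a comparable number, which is what lets a team defend the final location choice.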


Ethiopia Agriculture and REDHAT Openshift AI

Vanakkam all – from 'CubenSquare-PallikoodaM'

Ethiopia Agriculture & Red Hat OpenShift AI:
Imagine Red Hat OpenShift AI helping Ethiopian agriculture by providing farmers with the tools and insights they need to optimize and increase productivity. On the other hand, I am thinking about the investment needed to make this happen. While we have different product options, it comes down to the specific requirements: prediction quality, results, security, and a scalable, enterprise-grade product that can serve different industries in the future.

About Ethiopia:
Ethiopia, in the Horn of Africa, is a rugged, landlocked country split by the Great Rift Valley. With archaeological finds dating back more than 3 million years, it's a place of ancient culture. Ethiopia is a country with a high rate of farming but low adoption of advanced technology; agriculture remains mostly traditional. Ethiopia continues to face challenges in its agricultural sector, even as efforts are being made to improve productivity and sustainability. The key issues include:
- Waterlogging
- Salinity
- Soil acidity
- Parasitic weeds
- Problems with irrigation scheduling

What if?
What if we implemented Red Hat OpenShift AI to address the above issues in Ethiopian agriculture? The product can process large datasets from sensors and drones deployed in agricultural fields to monitor soil condition, crop health, and weather patterns.

Machine Learning Models
Leverage machine learning models to get effective, high-quality results through Red Hat OpenShift AI:
- ARIMA – Auto-Regressive Integrated Moving Average
- LSTM – Long Short-Term Memory
- Random Forest – ensemble methods
- Convolutional Neural Networks – image recognition
- YOLO – You Only Look Once
- Isolation Forest
- XGBoost
- Q-Learning – reinforcement learning

Red Hat OpenShift AI:
The data from sensors and drones is fed into Red Hat OpenShift AI.
AI models can predict weather patterns, early signs of pest infestations, and the optimal times for planting and harvesting. A farmer-friendly mobile app could both collect this field data and present the predictions, helping farmers make decisions that lead to higher yields. Introducing such technologies would genuinely help farmers.
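To make the sensor-analytics idea concrete, here is a toy anomaly detector for soil-moisture readings. It is a pure-Python z-score stand-in for the Isolation Forest idea listed above, and the readings and threshold are invented for illustration, not real field data.

```python
# Toy anomaly detector for soil-moisture sensor readings: a z-score
# stand-in for the Isolation Forest idea. Readings/threshold are invented.
import statistics

def anomalies(readings: list, threshold: float = 2.0) -> list:
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) / stdev > threshold]

# Percent soil moisture from one field sensor; 5 is an outlier that could
# indicate a failed sensor or a severe irrigation problem.
moisture = [31, 33, 32, 30, 34, 32, 31, 5, 33, 32]
print(anomalies(moisture))  # -> [5]
```

In a real deployment, a trained model (Isolation Forest, LSTM, etc.) served from the platform would replace this threshold rule, but the pipeline shape is the same: sensor stream in, flagged anomalies out to the farmer's app.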


Journey Back to Private Datacenter from Cloud | Dropbox

Vanakkam all In current world, companies are rushing towards switching their application from private datacenter(DC) to Cloud providers who provide various services including compute, networking, storage, security etc. The main reason for switching from DC to Cloud revolves around the DC cost, efficiency, scalability. But soon, will we be witnessing them migrating back from Cloud to Private Datacenter considering the unprecedented price hike, unused services, unused resources, confusion in service selection etc and also server manufacturers offering the hardware in smaller size, AI powered processors which occupies less space comparing to olden days. Example | Dropbox When we talk about moving back to DC due to unplanned cloud services usage and its effect on costing, there are several companies out there who have already moved back to their private DC or planning to move back as challenge to showcase that they can built an cost effective, efficient, planned DC on their own instead spending a huge budget on cloud Dropbox In a well-publicized move, Dropbox decided to shift away from Amazon Web Services (AWS) to its own custom-built infrastructure. This decision was primarily motivated by the need to control costs and improve performance, as managing their massive amounts of data on AWS was becoming increasingly expensive. “It was clear to us from the beginning that we’d have to build everything from scratch,” wroteDropbox infrastructure VP Akhil Gupta on his company blog in 2016, “since there’s nothing in the open source community that’s proven to work reliably at our scale. Few companies in the world have the same requirements for scale of storage as we do.” Its the backward approach. Now, Dropbox has its own advanced AI driven Datacenters across. Their strategy on building a Datacenter is interesting and amazing. They have come up with their own checklist, stages, planning in acquiring a place before Datacenter is being officially set. 
Interesting checklist | DC site selection process: Dropbox before it stages a DC, it involves in following process Site Selection Process Power Space Cooling Network Security Site Hazards Operations & Engineering Logistics Rental rate Utility rate Rental escalator Power usage effectiveness Supporting Infrastructure Design Expected cabinet weight with dimensions and expected quantity Increased risk due to construction delays Inadequate monitoring programs, which would not have provided the necessary facility alerts With above all selection process, the team comes up with a Score card. Based on the score, they decide the site location and then work on the DC setup. Large Vs Small DC space : The technology advancement is moving towards having small servers, small rack rack space and facility to easily upgrade the hardware or enhance the existing hardware. We have providers who can help in hardware upgrade lease agreements. Consult our CubenSquare Experts for Migration : Reach out to our experts for – Move back to Private Datacenter setup Compare existing Cloud pricing Vs DC setup and its pricing forecast We understand your application, customer base, thought process and provide Cloud/DC solution Cost optimization solution in existing Cloud Summary : Probably, in next 5 years, we can see several companies moving back to private datacenters from cloud considering the temptation of using services which they don’t need, excessive usage of resources, lack of knowledge in choosing the right service resulting in enormous price hike

Journey Back to Private Datacenter from Cloud | Dropbox

Employment opportunities for Law graduates – Legal Landscape experiencing dynamic growth

Vanakkam all

While the IT, food, healthcare and financial industries are seeing drastic growth and innovation, one field experiencing dynamic growth across all these sectors is the legal landscape. Several new law firms are opening across the country, and the best part is that it's not about the quantity but the diversity of legal services offered.

A few law firms to name:

INDUSLAW – A multi-speciality Indian law firm with offices in Bangalore, Delhi, Hyderabad, Chennai and Mumbai, catering to a wide range of legal services including corporate law, intellectual property and litigation.
Veritas Legal – Specializes in corporate law and has quickly made a name for itself, particularly in mergers and acquisitions and private equity transactions.
Algo Legal – Based in Bengaluru, this firm positions itself around the venture capital and emerging technology sectors, integrating legal services with deep tech insights.
Pioneer Legal – Expertise in banking and finance, corporate law, and dispute resolution.
Touchstone Partners – A firm that has come into prominence for its focus on investments, financings and corporate advisory services, particularly in the tech and startup ecosystems.

Positive way ahead: It's good to see several new law firms opening in India, bringing:

Focus on sector-specific services
Increased competition
Job creation
Support for startups and SMEs

By the way, let's understand the key difference between Legal Tech and Tech Law.

Tech Law is about understanding and navigating the laws related to technology. Examples: advising a tech startup on compliance with data privacy laws; handling patent infringement cases for technology products.

Legal Tech is about using technology to improve the practice of law. Examples: implementing a case management system in a law firm to improve operational efficiency.
Using AI for predictive analysis to forecast litigation outcomes. We will explore more on new positions in the legal sector and their job roles & responsibilities.


RedHat Openshift | Oil spill, Marine Ecosystem | Openshift Engineer Day to Day activities | Internship @CubenSquare

Vanakkam all

Let's understand oil spills, their impact, and how #RedHat can help solve the problem.

What is an oil spill: An oil spill in a marine ecosystem refers to the release of liquid petroleum hydrocarbons into the ocean or coastal waters. It is often due to human activities such as offshore drilling accidents or tanker spills.

Oil spills in 2023: According to ITOPF, about 2,000 tonnes of oil were lost to the environment from tanker spills in 2023. There were 10 oil spills of more than seven tonnes, including one in Asia where more than 700 tonnes of oil leaked.

Impact on the marine ecosystem: Oil spills have devastating effects on marine ecosystems. They can lead to the death of marine life, damage to habitats, and long-term ecological consequences. Oil coats marine species, impairing their ability to move, breathe and feed, leading to increased mortality.

Role of Private #5G, #AI, drones and cameras in prevention: Private #5G networks enhance the reliability and speed of data transmission from monitoring devices. #AI can analyze data from these networks for early detection of spills and predictive maintenance. Drones and cameras offer real-time surveillance over vast marine areas, identifying spills quickly. Together, these technologies play a crucial role in preventing oil spills.

#redhatopenshift & Private #5G: Minsait, an Indra company, has taken the joint solution from #RedHat and #Intel to market and has already seen the benefits in multiple edge- and AI-enabled use cases. #RedHatOpenShift provides a unified cloud-native platform for private 5G workloads. I discussed OpenShift and Private 5G in detail in an earlier article.
Redhat Openshift Engineer #DaytoDayactivities: One of our students got placed overseas in #Redhat Openshift technology, and his day-to-day activities involve:

Design and Engineering: Design cluster builds such as cluster size, node size, number of workers, number of infra nodes, types of storage, type of authentication to the cluster, type of load balancer to use, etc.
Solutions: OCP is not a logging platform (meaning it is not ideal to store all logs inside the cluster). Per Red Hat's recommendation, all logs (audit logs, infra logs and application logs) should be stored outside the cluster, so as an OCP Engineer you should design and put a solution in place for this.
Cluster scaling: how to handle the workloads in the cluster, whether to scale vertically or horizontally, and what the impact of each is.
Registry solutions: how to store and manage Docker images, both for the cluster and for applications.
Maintaining and managing projects/namespaces.
Deciding what kind of RBAC to create, manage and maintain in the cluster, for both administrators and consumers.
Installing Operators in the cluster.

#RedhatOpenShift Engineer #Internship CubenSquare:
– Get an overview of the project and client requirement
– Get assigned to the project & track the progress
– Hands-on experience, including: Openshift setup, application deployment, namespace and network policy setup, user/group/access control, enhancing security, deploying ingress and secure routes, monitoring & alerting, troubleshooting
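To illustrate the RBAC part of the day-to-day work above, here is a minimal sketch of a namespaced Role and RoleBinding that grant read-only access to a consumer group. All names here (app-team, viewers, pod-reader) are hypothetical, not from any real cluster:

```yaml
# Hypothetical example: read-only access for a consumer group in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: app-team
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch"]   # read-only verbs only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: app-team
subjects:
- kind: Group
  name: viewers                     # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applied with oc apply -f role.yaml. For cluster-wide permissions, an administrator would use ClusterRole and ClusterRoleBinding instead.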


The Digital Personal Data Protection Act, 2023 | Penalties | Redhat Openshift & Security | Google Alerts

Vanakkam all

In the current #ai world, how can an individual have control over their personal data, and how can they understand how companies process it? In line with this, on 11th August 2023 the following act of Parliament received the assent of the President: 'THE DIGITAL PERSONAL DATA PROTECTION ACT, 2023'. It is an Act to provide for the processing of digital personal data in a manner that recognises both the right of individuals to protect their personal data and the need to process such personal data for lawful purposes, and for matters connected therewith or incidental thereto.

As per the DPDP Act:
(t) "personal data" means any data about an individual who is identifiable by or in relation to such data;
(u) "personal data breach" means any unauthorised processing of personal data or accidental disclosure, acquisition, sharing, use, alteration, destruction or loss of access to personal data, that compromises the confidentiality, integrity or availability of personal data.

Know your rights | Rights and duties of the data principal: An individual whose data is being processed (the data principal) has the right to: (i) obtain information about processing, (ii) seek correction and erasure of personal data, (iii) nominate another person to exercise rights in the event of death or incapacity, and (iv) grievance redressal.

Real-time example: This week, a leading AI law expert questioned a website company about how his personal data was being processed after he signed in to the website. Due to his doubts after signing in, he demanded the right to access the data processing areas specific to his personal data. The company immediately accepted and passed the request on to their IT team to provide the access. Probably due to his designation and popularity, immediate action was taken. As common citizens, how many of us are aware of our rights?
It is time for every citizen to know their rights, how their personal data is being processed, and their right to question it.

Google Alerts: Setting up Google Alerts for your name or personal information is a straightforward yet effective strategy for monitoring your digital footprint. Google Alerts is a free service that notifies you via email whenever new results, such as web pages, newspaper articles or blogs, appear in Google's search results for the terms you specify. Try it and post your comments on whether it really works and helps prevent wrongful usage of your personal data: https://www.google.com/alerts

Penalties as per the DPDP Act: As per 'The Schedule' section, depending upon the severity of the breach, the penalty may vary between ₹10,000 and ₹50 crore.

Redhat Openshift & Security: #RedhatOpenshift, a leading enterprise #Kubernetes platform, offers several features and capabilities that can be leveraged to enhance personal data protection, especially for organizations managing and processing personal data in compliance with regulations like the DPDP Act in India.

1. Data security and encryption: encrypt data at rest and in transit
2. Access control and authentication (RBAC)
3. Network policies (allow or deny network traffic)
4. Container image security (scanning images)
5. Automated compliance policies (policy management tools)

While OpenShift provides the technical capabilities to support personal data protection, successful implementation also depends on how organizations configure and use these features. Proper configuration of security settings, proper management of access controls, regular auditing, and adherence to best practices in application development are essential to fully leverage OpenShift's capabilities for data protection.

What's next: Let's discuss the Redhat Openshift features above with examples and technical nuances.
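As a small sketch of point 3 above, a default-deny NetworkPolicy blocks all inbound traffic to pods in a namespace until explicit allow rules are added. The namespace name below is a made-up placeholder:

```yaml
# Hypothetical example: deny all ingress traffic in a namespace handling personal data
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: pii-app        # placeholder namespace name
spec:
  podSelector: {}           # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress                 # no ingress rules listed, so all inbound traffic is denied
```

Teams would then layer additional NetworkPolicy objects on top of this to allow only the specific traffic each service needs.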


Will AI replace lawyers? Pending cases until December 2023 | Automate routine tasks

Vanakkam All,

It's not just IT engineers; the change is spreading across the legal profession too. Just like coaches analyze past games to anticipate a player's next move, lawyers could leverage historical verdicts, arguments, statements and client information for predictive insights using AI. This approach could be beneficial, enhancing strategy formulation and outcome prediction, but it raises ethical and privacy concerns that must be navigated carefully.

No AI can replace a judge: What if we consult an AI for drafting opening and closing arguments in a case? The real competition will be between a human and AI in court, with the key role belonging to the honorable judge. No AI can replace a judge.

Are lawyers concerned that AI is going to take their jobs? AI isn't just for technical fields; it's making its way into the legal profession too. The question arises: can AI replace lawyers? What if AI is utilized for dispute resolution, with Lok Adalats driven by data from AI, ensuring impartiality? Similar to the automation seen in IT, AI holds the potential to significantly alter legal work processes. It could automate routine tasks like document review and drafting with applicable legal sections, marking a transformation in the legal field.

Time for legal teams to look at adopting AI: The current capabilities of AI cannot match the judgement, ethical considerations, sentiment-aware statements and client counselling that lawyers provide. AI can serve as an assistant rather than a replacement, but is this always the case? With advancements, many large law firms are exploring AI tools to automate tasks traditionally handled by junior staff, like case analysis and summarization. For instance, if AI could prepare and file case arguments in days instead of weeks, what would be the impact on lawyers accustomed to such tasks?
This situation raises questions about job security and the need for upskilling, potentially changing the nature of discussions among lawyers during their break time in the cafeteria, at parties or at get-togethers. By using AI, lawyers can focus on strategy and client interaction, and look for possible ways to achieve better legal outcomes and client satisfaction. The phrase "I will see you in court" might evolve into "I will see you in AI-court," leading us to ponder the direction our world is taking. Does this thought reflect incompetence or genuine concern for humanity's future? This transformation in legal discourse could certainly spark varied discussions among lawyers about the integration of AI in their field and its implications for their profession and daily conversations.

Pending cases in India: According to Hindustan Times, by the end of December 2023 there were 5 crore pending cases in India's courts, including 80,000 in the Supreme Court and 61 lakh in the 25 high courts. Integrating AI into the judicial system could potentially expedite case processing and help reduce the backlog.

Conversations and debates are required: We need more and more conversations and debates around AI in the judicial system: the pros and cons, how to adopt it, what to adopt, and what not to adopt. We can't hide from or ignore AI anymore. We are in the AI world, and it's not the future; it's the present.

Immediate next: To protect client confidentiality when using AI in the judiciary, it's crucial to implement robust data protection measures. This includes encrypting sensitive information, using secure AI training datasets that exclude personal data, and ensuring AI systems comply with privacy laws.


Time for ‘Lok Adalat for AI in India’ | Judiciary should create AI-related job positions

Vanakkam all

With the rise in #ai utilization, the judiciary is likely to encounter numerous AI-related cases. While the government works on an AI act, I believe establishing a Lok Adalat specifically for AI could efficiently resolve many such disputes.

What is a Lok Adalat: Lok Adalat means "people's court". It is a system of alternative dispute resolution. Lok Adalats settle disputes through conciliation and compromise, offering a quick, cost-effective and binding resolution. The system is recognized under the Legal Services Authorities Act, 1987, and has the authority to settle a broad range of civil cases and compoundable criminal cases.

No appeal, no fee: Decisions made by Lok Adalats are final and cannot be appealed, encouraging the parties to willingly resolve their disputes. Also, no court fee is payable.

Composition:
1 Chairman: must be a sitting or retired judicial officer
2 Members: a lawyer and a social worker

First National Lok Adalat of 2024 (March 9th): Over 11.3 million cases were settled in the first National Lok Adalat of 2024. The National Legal Services Authority (NALSA) successfully organized it in the taluks, districts and high courts of 34 states and Union territories on Saturday. According to information from the state legal services authorities across the country as of 6 pm on Saturday, 1,13,60,144 cases were settled, including 17,14,056 pending cases and 96,46,088 pre-litigation cases. Approximate value of settlement: Rs 8,065.29 crore.

AI everywhere: Soon the world will see an AI-dominant lifestyle. While the innovations are helping several industries, including medical science, they also instil fear around transparency, privacy and accountability.
Flood of AI cases: India is soon going to face several cases around AI violations: bias and discrimination in decision-making, privacy invasion through extensive data collection, ethical concerns over AI decision-making in critical areas like healthcare and criminal justice, autonomous weapons, etc.

Lok Adalat for AI: Today the Artificial Intelligence Act was passed in the European Parliament. At the same time, India is working on its own Artificial Intelligence Act and its nuances. While the government shapes the act, having a Lok Adalat for AI would help settle disputes between individuals and corporates. A Lok Adalat for AI would be an approach to handling conflicts that arise from AI operations, usage and development: a framework that accommodates the complexities of AI technology, including issues of bias, transparency and accountability.

Composition of a Lok Adalat for AI: Involve a panel of AI experts, legal professionals and ethicists to review cases and make decisions, potentially supported by AI tools to analyze data and predict outcomes. However, the implementation of such a system would need to carefully consider ethical guidelines, legal standards, and the limitations of AI in understanding human contexts.

AI job roles: The judicial system should start recruiting AI experts, who will be responsible for analyzing the data, understanding the algorithms used, and predicting outcomes. Having the right experts will help with a speedy process.


Red Hat Openshift 4.14 – Key Enhancements and course content

Vanakkam all

A brief on Red Hat Openshift 4.14 features and course content.

Key facts and enhancements:
Red Hat Openshift version: 4.14
Based on Kubernetes version: 1.27
Container engine CRI-O: 1.27
Extended Update Support (EUS): from OCP 4.12, an additional 6 months added
RHEL CoreOS uses RHEL 9.2
OCP can be installed on Oracle Cloud Infrastructure using the Assisted Installer
Hosted control planes on bare metal and Openshift Virtualization
Boost AI and graphics workloads with Red Hat OpenShift
Enhanced security features

Red Hat Openshift official training content, with hands-on labs, includes:
Kustomize manifests
Openshift templates
Helm charts: deploy applications using Helm
Authentication & authorization
Network security
Load balancer services
Developer self-service: quota, limitrange, project templates
K8s Operators
Security & updates

Zoom out: The introduction of hosted control planes and expanded virtualization support in 4.14 reflects a significant move towards offering more flexible, scalable and cost-efficient deployment options for OpenShift clusters.
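To illustrate the developer self-service topic (quota, limitrange) from the course content above, here is a minimal sketch of a ResourceQuota and LimitRange for a project. The project name and the numbers are illustrative assumptions, not recommended values:

```yaml
# Hypothetical example: capping a dev project and giving containers sane defaults
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-team        # placeholder project name
spec:
  hard:
    pods: "10"               # at most 10 pods in the project
    requests.cpu: "4"        # total CPU requests capped at 4 cores
    requests.memory: 8Gi     # total memory requests capped at 8 GiB
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev-team
spec:
  limits:
  - type: Container
    default:                 # limits applied when a container sets none
      cpu: 500m
      memory: 512Mi
    defaultRequest:          # requests applied when a container sets none
      cpu: 250m
      memory: 256Mi
```

Once both are in place, oc get quota and oc get limits in the project show current usage against the caps.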


Red Hat Openshift Cheat Sheet

Vanakkam all

For students and corporate professionals preparing for exams and interviews, start with these commands and practice them. More to come on commands for different industry focuses.

Login to an OpenShift cluster:
oc login -u username -p password https://api.cluster.example.com:6443

List the common objects:
oc api-resources
oc get nodes
oc get pods
oc get deployment
oc get deploymentconfig
oc get service
oc get endpoint
oc get route
oc get replicaset
oc get daemonset
oc get namespace
oc get quota
oc get limits
oc get projects
oc get secrets
oc get configmap
oc get persistentvolume
oc get persistentvolumeclaim
oc get storageclass
oc get job
oc get cronjob
oc get clusterrole
oc get users
oc get groups
oc get serviceaccount
oc get role
oc get hpa
oc get machineconfig

Common output options of an object:
oc get nodes -o wide
oc get pods -o wide
oc get deployment -o wide
oc get service -o wide
oc get pods <pod-name> -o yaml
oc get pods <pod-name> -o json
oc get deployment <deployment-name> -o yaml
oc get service <service-name> -o yaml

Edit options:
oc edit node <node-name>
oc edit deployment <deployment-name>
oc edit service <service-name>
oc edit route <route-name>

Troubleshoot:
oc logs <pod-name>
oc get ep
oc describe pod <pod-name>
oc describe deployment <deployment-name>

Help options:
oc create deployment --help
oc create route --help
oc create service --help
oc create secret --help
oc create configmap --help
oc create serviceaccount --help
oc adm policy --help
oc scale --help
oc autoscale --help

List all projects: oc get projects
Switch project: oc project my-project
Create a new application: oc new-app --image=quay.io/redhattraining/todo-angular:v1.1
Get pods: oc get pods
Watch pods in real time: oc get pods -w
Describe a pod: oc describe pod <pod_name>
Execute a command in a pod: oc exec <pod_name> -- df -k

Logs & debugging:
Get logs for a pod: oc logs <pod_name>
Follow logs in real time: oc logs -f <pod_name>
Debug a pod: oc debug pod/<pod_name>

Managing resources:
Create a resource (e.g., Deployment, Service) from a file: oc create -f <filename.yaml>
If you don't know how to write the YAML, export it from an existing object:
oc get deployment todo-angular -o yaml
oc get deployment todo-angular -o yaml > mydeployment.yaml
ls -lrt
Get deployments: oc get deployment
Scale a deployment: oc scale deployment todo-angular --replicas=3
Edit a resource: oc edit <resource_type>/<resource_name> (example: oc edit deployment/todo-angular)
Delete a resource: oc delete <resource_type> <resource_name> (example: oc delete deployment todo-angular, then oc get deployment)
Create a new project: oc new-project <project_name> (example: oc new-project test)
Grant a role to a user in a project: oc policy add-role-to-user <role> <username> -n <project_name> (example: oc policy add-role-to-user admin test-user -n test)
Get users: oc get users

Networking:
Expose a service externally: oc get service, then oc expose svc/<service_name> (example: oc expose svc/todo-angular)
Get routes: oc get route

Help: oc help


Anticipate and detect wildfires, analyze data in real time : Red Hat Openshift, AI, Edge, Private 5G

Vanakkam all

One of the key discussions at MWC24, Barcelona, was about Minsait, an Indra company, taking the joint solution from Red Hat and Intel to use cases such as wildfire detection, which combines edge computing, Private 5G, AI and Redhat Openshift. Technologies such as IoT and AI help with the intelligent detection of fires, minimize false positives, and can generate early warnings to accelerate the intervention of personnel.

Minsait: 'Spain – In the summer of 2022, more than 73,000 hectares had burned; the area burned in Spain was almost double the average of the last decade, and the forecasts for the remainder of the heat did not look better. An emergency context in which it is necessary for governments, companies and citizens to take measures and collaborate in a coordinated manner, to minimize its consequences and establish future measures that contribute to sustainable and more conservationist policies for the world in which we live.'

From the above statement, we understand the seriousness of wildfires and their impact on human health, the environment and the economy.

AI, Private 5G, edge computing: These three offer innovative solutions to enhance wildfire detection and prevention efforts:
Early detection
Predictive analysis
Monitoring
High-speed dedicated networks
Remote operations: drones, robotic firefighting equipment, real-time data analytics
Scalability and efficiency

Role of Red Hat Openshift: Redhat Openshift, an enterprise Kubernetes distribution, enables the packaging of AI models and data processing applications. It also helps with auto-scaling and high availability.
Redhat Openshift tool stack: Tools that can be leveraged for this use case:
Openshift Container Platform (OCP) v4.10, v4.12
Openshift AI
Openshift Pipelines: Tekton | native CI/CD solution
Openshift GitOps: ArgoCD
Openshift Service Mesh: Istio
Advanced Cluster Management for Kubernetes
Redhat Quay: container registry
Redhat Marketplace
Container engine: CRI-O
RHEL CoreOS: operating system

Different environments to deploy and test:
Sandbox
Test
Production
Disaster Recovery (DR)

Platform:
On-premises
Cloud – AWS

Summary: Red Hat Openshift enables an effective approach to managing wildfire risks by providing an automated, scalable and secure platform for deploying and managing the applications and services that make up the AI, Private 5G and edge computing solution for wildfire detection and prevention.
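To illustrate the Openshift GitOps (ArgoCD) item in the tool stack above, here is a sketch of an Argo CD Application that keeps a cluster in sync with a Git repository. The repository URL, paths and names are placeholders for illustration, not part of any real Minsait deployment:

```yaml
# Hypothetical example: GitOps-managed deployment of a wildfire-detection app
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wildfire-inference          # placeholder application name
  namespace: openshift-gitops       # default namespace of the OpenShift GitOps operator
spec:
  project: default
  source:
    repoURL: https://github.com/example/wildfire-manifests   # placeholder repo
    targetRevision: main
    path: overlays/production       # e.g. a Kustomize overlay per environment
  destination:
    server: https://kubernetes.default.svc   # deploy to the same cluster
    namespace: wildfire
  syncPolicy:
    automated:
      prune: true                   # delete resources removed from Git
      selfHeal: true                # revert manual drift back to the Git state
```

With this in place, promoting a change to sandbox, test or production is just a Git commit to the corresponding overlay.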


Telecom : Red Hat Openshift supports NTT to provide Large-scale AI data analysis in real time

Vanakkam all

NTT – Nippon Telegraph and Telephone Corporation – is a Japanese multinational information technology (IT) and communications corporation headquartered in Tokyo, Japan. AI is all about analyzing data and producing outputs, but how fast can that processing happen? For large-scale AI data analysis, and in real time at that, let's take a look at how Red Hat is helping NTT.

As part of the Innovative Optical and Wireless Network (IOWN) initiative, NTT Corporation (NTT) and Red Hat, Inc., in collaboration with NVIDIA and Fujitsu, have jointly developed a solution to enhance and extend the potential for real-time artificial intelligence (AI) data analysis at the edge (MWC Barcelona, February 26, 2024).

As the volume of data from sensors and devices grows, processing this data efficiently becomes crucial. Performing AI analysis at the network's edge, where the data is generated, helps in assessing input in real time. When large volumes of data are processed with AI, analysis can be slow due to computational demands, and updating AI products can mean integrating additional hardware at extra cost. With edge computing capabilities emerging in more remote locations, AI analysis can be placed closer to the sensors, reducing latency and increasing usable bandwidth.

Hardware accelerators: Beyond the general-purpose CPU, hardware accelerators are specialized hardware components designed to perform specific compute-intensive, high-speed tasks, for example AI, ML, deep learning and data analytics. Graphics Processing Units (GPUs) are highly efficient at parallel processing tasks, making them well suited for AI and ML training and data analysis. Data Processing Units (DPUs) accelerate networking, storage and security tasks.

Red Hat Openshift and hardware accelerators: Red Hat OpenShift is an enterprise Kubernetes platform for deploying, running and managing containers across different environments.
OpenShift facilitates the integration of hardware accelerators into your Kubernetes clusters:
It provides mechanisms to schedule workloads on nodes equipped with accelerators (GPUs, DPUs, etc.), ensuring that your AI/ML applications can access the specialized computing resources they need (for example, via nodeSelector).
It ensures that hardware accelerators are used efficiently.
It simplifies the deployment of applications that require hardware accelerators (Operators).
It abstracts the underlying infrastructure details, allowing developers to focus on building and scaling their applications without worrying about the specifics of the hardware.

Summary: Large-scale AI data analysis in real time, powered by Red Hat OpenShift, can use Kubernetes Operators to minimize the complexity of implementing hardware-based accelerators (GPUs, DPUs, etc.), enabling improved flexibility and easier deployment across disaggregated sites, including remote datacenters. As Chris Wright, chief technology officer, put it: "With Red Hat OpenShift, we can help NTT provide large-scale AI data analysis in real time and without limitations."
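As a minimal sketch of the nodeSelector mechanism mentioned above, the pod below is steered onto a GPU node and requests one GPU through the device plugin. The node label shown is the one typically applied by the NVIDIA GPU Operator, and the image name is a made-up placeholder:

```yaml
# Hypothetical example: scheduling an AI workload onto a GPU-equipped node
apiVersion: v1
kind: Pod
metadata:
  name: ai-inference                   # placeholder workload name
spec:
  nodeSelector:
    nvidia.com/gpu.present: "true"     # label assumed to be set by the GPU Operator
  containers:
  - name: inference
    image: example.com/ai/inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1              # request one GPU via the device plugin
```

The scheduler will only place this pod on nodes carrying the label and exposing the nvidia.com/gpu resource; without such a node, the pod stays Pending.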


Wind Farm – Private 5G, Red Hat Openshift and the demand for certified Engineers

Vanakkam all

I am from Kanyakumari, the southern tip of India. We have India's largest operational onshore wind farm at a place called Muppandal, in Kanyakumari district. On the way home, we cross a landscape of lush green fields surrounded by mountains, with goats, sheep, cows, birds, and farmers walking briskly in the morning sunshine (this generation lacks that briskness and natural energy). And not just that: a gentle breeze, early morning sunshine on your face, and a backdrop of wind turbines averaging 80 feet in height with their rotating gigantic blades. The wind farm was developed by the Tamil Nadu Energy Development Agency. I often think about the safety of birds crossing those wind farm radars and blades, but had never thought about how technology could solve that problem.

Private 5G: As the name implies, Private 5G is a dedicated network that uses 5G technology to create a private network tailored to a specific organization's needs. It is exclusive to the organization that sets it up, giving them more control over the network's setup, management, security, access and performance. One quick example of a private 5G deployment in an industrial environment is the Siemens Automotive Test Center.

Key features of Private 5G:
High speed and low latency
Enhanced security
Customizability and control
Improved connectivity for IoT devices
Dedicated resources

What is the connection between Private 5G and a wind farm? A wind farm is a group of wind turbines used to produce electricity. They harness the kinetic energy of wind and convert it into electrical energy through the rotation of blades connected to generators.

Protected birds: Wind turbine blades can pose threats to species that are protected by law. Birds can collide with rotating blades or be disturbed by wind farms, leading to a decline in their population.
Problem statement: The problem outlined at #MWC24 on Feb 28, 2024 was the possible collision of birds in wind farms: protecting wildlife and preventing penalties.

Solution (#MWC24): Early detection of protected birds in wind farms to avoid the environmental impact.

Redhat Openshift & Private 5G: As per Kelly Switt, Global Head of Intelligent Edge, Red Hat: 'Red Hat and Intel have collaborated to create a cloud- and edge-native private 5G solution for industrial and cross-vertical deployments that is cost-effective and easier to adopt. This enables manufacturers to more readily capitalize on the massive revenue opportunity presented by AI-enabled software-defined operations and factories.'

In a wind farm, Private 5G can be applied to:
Real-time data analytics and monitoring
Remote control and automation
Enhanced safety and security
Drone inspections
Digital twin technology: a virtual representation of the wind farm
IoT integration

By leveraging the capabilities and niche features of 5G, wind farm operators can achieve higher efficiency, enhanced safety and lower operational costs. Private 5G applications can be deployed on Redhat Openshift, which provides a unified cloud-native platform. Openshift is beneficial for simplified Network Functions Virtualization (NFV), edge computing, automation and orchestration, and security and compliance.

What's the AI role in this use case? Utilizing high-resolution cameras around wind farms, AI algorithms can continuously monitor the skies for bird activity. These algorithms are trained to identify protected bird species from video feeds in real time. By recognizing specific species, especially those that are protected or at risk, AI can provide immediate alerts when such birds are detected near the turbines. When a protected bird or its flock is detected, there are two ways to prevent a collision: 1. generate an acoustic sound to divert the birds' flying direction, or 2. automatically slow down the blade rotation.
Private 5G networks ensure that the vast amounts of data collected by cameras and microphones are transmitted with minimal latency to the AI processing units. This enables immediate action to avoid any collision. The AI system can also learn from every incident, improving its accuracy and effectiveness over time, which also helps wind farm operators stay regulatory compliant.

Demand for Redhat Openshift Engineers: All these innovative solutions built on Redhat Openshift are in turn creating a huge job market for certified Redhat Openshift Engineers. Learn the technology, understand the nuances, see where it is being used and how, and finally get placed in reputed organizations and enjoy the journey of innovation.


Telecom | Why Red Hat is scoring and will score in the field of 5G

Vanakkam all

The telecom industry across the globe is at its peak in revolutionizing the 5G network. Redhat Openshift plays a vital role in providing the infrastructure and platform capabilities necessary for deploying and managing network functions and applications that run on top of 5G networks. This has led the industry to transform its technology, resulting in a drastic increase in the Redhat Openshift Engineer job market.

Let's get to the bottom of it, from a different angle:
What is 5G
Services offered in 5G
Redhat Openshift and eMBB | 5G
Interview focus areas

What is 5G? The 5th generation of wireless technology for digital cellular networks. It enables, among other things:
Live streaming and OTT platforms
VR gaming and education simulations
Sensors for soil and crop monitoring, automated irrigation systems, drone-based surveillance
High-speed internet for manufacturing, healthcare, retail and transportation
Real-time remote medical consultations and surgeries
Real-time control of heavy machinery in mines or on offshore platforms

All of us are aware of the above services, and all of them are enabled by 5G. A quick example: we watch live streams, movies and series without any buffering, thanks to 5G.
Main features:
- Higher data speed: up to 10 Gbps
- Lower latency
- Increased connectivity
- Improved network efficiency
- Wider coverage

Services offered in 5G (hope the details below ring a bell when compared to your own usage):
- Enhanced Mobile Broadband (eMBB): faster internet speed for smartphones, streaming HD video, gaming, VR
- Ultra-Reliable Low-Latency Communications (URLLC): self-driving cars, infrastructure, real-time road-traffic efficiency, remote surgery
- Massive Machine Type Communications (mMTC): support for a large number of IoT devices, such as smart lighting, traffic management, environmental monitoring, and sensors and drones to monitor crop health and soil moisture with automated watering and harvesting
- Fixed Wireless Access (FWA): wireless access with higher speed and reliability
- Industrial IoT: improved tracking and monitoring of goods in transit, with real-time data on location and delivery times
- Healthcare: telemedicine, remote patient monitoring, high-quality video consultations
- Entertainment and media: multi-angle video streaming, VR

Red Hat OpenShift and eMBB | 5G:
Enhanced Mobile Broadband (eMBB) powers remote education and learning, and media companies leverage 5G for streaming high-definition and 360-degree videos to users for live sports, concerts, and events. All the services mentioned above are applications written in some language, built, and deployed on platforms, and they go through frequent updates and scaling based on user demand. Red Hat OpenShift, an enterprise Kubernetes distribution, helps in the deployment and management of these applications by offering a robust, scalable, and flexible container orchestration platform. OpenShift supports eMBB in the following aspects:
- Scalability
- Edge computing
- CI/CD
- NFV (Network Function Virtualization)
- Containerization and microservices

Interview focus area:
The objects below are essential for container orchestration, networking, storage, security, and more.
While you prepare for a telecom company interview, make sure you focus on the areas below too, along with troubleshooting scenarios:
Pods, Deployment, Services, Ingress, Secrets, ConfigMap, PV, PVC, NetworkPolicy, HPA, Operators, StatefulSet

Telecom | Why Red Hat is scoring and will score in field of 5G Read More »

Why industries choose Redhat Openshift, over opensource Kubernetes

Vanakkam all

There is often a question around job opportunities for OpenShift engineers: will we get job offers with a good package if we learn Red Hat OpenShift? Here's your answer. Let's consider two major industries, financial and telecom. Both of these industries rely heavily on the enterprise Kubernetes distribution, OpenShift. Why?

Financial industry:
From the financial industry's perspective, the reasons for choosing Red Hat OpenShift can be categorized as follows:
- Red Hat enterprise support to maintain SLAs
- Regulatory compliance
- Security | protecting sensitive financial data
- Transaction system availability
- Faster deployment | bringing new features to market quickly

Regulatory compliance – the wand:
Regulatory compliance is one of the most crucial parts of adherence monitored by government bodies, regulatory agencies, and financial authorities. Financial institutions must apply laws, regulations, and guidelines to their business operations. Deviating from those rules and regulations will result in huge penalties, legal action, suspension of the institution, etc. Hence the technology behind such regulatory compliance must be strong and trusted. Red Hat OpenShift, the enterprise Kubernetes distribution, is among the popular trusted technologies, with robust security features and enterprise support. Security features like RBAC, NetworkPolicy, and Security Context Constraints (SCC) help maintain that compliance, while built-in logging and monitoring features help with audit requirements and trails.

Transaction system – high availability and scalability:
All transactions in financial systems are considered crucial, unbreakable, and traceable. OpenShift's auto-scaling and self-healing features ensure availability and scalability under heavy loads.

Enterprise support:
Above all, enterprise support bound to an SLA helps every organization relax in times of issues and outages. The Red Hat expert team is available round the clock to support and fix issues.
Telecom industry:
Similar to the financial industry, the telecom industry's reasons for choosing OpenShift can be categorized as:
- Network Functions Virtualization | high reliability, low latency
- Edge computing

Telecom companies are spread out across geographical areas; wherever you go, they follow. Such a business requirement needs virtual network functions (VNFs) managed across a wide geographical area with high reliability and low latency, and OpenShift can orchestrate these VNFs efficiently. This helps the telecom company meet its SLAs with end customers.

Edge computing:
Let's say I am in Chennai; I would be happy to have my data served from nearby locations instead of other states and regions. To solve this problem and have data transmitted from the location nearest the customer, edge computing is implemented. OpenShift facilitates the deployment and management of applications across edge locations consistently.

To summarize: Red Hat OpenShift offers a more secure, manageable, and supported environment for these industries' critical workloads, and the industries are happy with the product and service as they meet their end-customer SLAs and security standards. When the OpenShift product is in demand, of course the job opportunities for OpenShift roles are in demand too.
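To make the compliance discussion above concrete, here is a minimal sketch of one such control: a NetworkPolicy that restricts which pods may reach a payments service. All names here are hypothetical illustrations, not taken from any real deployment.

```yaml
# Hypothetical example: only pods labelled app=frontend in the same
# namespace may reach the payments pods, and only on port 8443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-payments
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```

Because the policy selects the payments pods and lists a single allowed ingress source, all other in-cluster traffic to them is denied by default, which is exactly the kind of isolation auditors look for.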

Why industries choose Redhat Openshift, over opensource Kubernetes Read More »

Telecom Industry | Red Hat Openshift | Interview Prep Focus Area

Vanakkam all

It's not just one area where we need digitization in the telecom industry, and it's not just one tool that offers the solution; rather, it's a combination of multiple tools and integrations. Let's focus on the application side and the dots connecting to Red Hat OpenShift.

Customer engagement applications:
Telecom operators enhance their customer experience by providing a personalized dashboard to manage accounts, billing, streaming options, data usage, customer support, etc. Along with this, chatbots help resolve L1 issues automatically. Such customer engagement applications with a digitized experience are in demand, and they are being containerized for better scalability and seamless application updates.

Amdocs DigitalONE:
Telecom operators use DigitalONE to enhance their customer engagement. Amdocs is a leading software and services provider to communications and media companies, transforming the customer experience with its innovative solutions. DigitalONE is part of Amdocs' customer experience suite, designed to provide a digital customer engagement platform.

Red Hat OpenShift integration:
Such customer engagement software runs as containerized applications on Red Hat OpenShift. Benefits include scalability, resilience, CI/CD, and security.

Interview prep focus area:
Now, the question is: how does this help me prepare for an interview, and which areas should be focused on with respect to Red Hat OpenShift?
Let's assume the flow below:
- Customer engagement application: Java
- Build the Java code: Maven
- Build the war/jar file into a container image: Podman
- Registry: Quay.io
- Deploy: Red Hat OpenShift
- Environments: Lab, Test, Prod, DR
- Clusters: 4 clusters | 1 master and 4 worker nodes | for prod, 3 masters and 7 worker nodes
- SCM: GitHub
- Build tool: Podman
- Orchestration tool: Red Hat OpenShift version 4.10 / 4.12
- OpenShift GitOps: Argo CD
- CI: Jenkins [ Build – Test – Repo ]
- Platform: On-prem & AWS

Key OpenShift objects to practice:
Node, Pod, Deployment, Service, ReplicaSet, DaemonSet, Route, Namespace | Project, ResourceQuota, LimitRange, NetworkPolicy, Secret, ConfigMap, Identity Provider, ServiceAccount, RBAC, SCC, HPA, Operator, PersistentVolume, PersistentVolumeClaim, StorageClass, MachineConfig

In an upcoming blog, we will cover another telecom operator utilizing a Red Hat tool to solve a different problem.
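The jar-to-image step in the flow above can be sketched with a minimal Containerfile that Podman would build and push to Quay before OpenShift deploys it. The image name, jar path, and port here are hypothetical placeholders for illustration only.

```dockerfile
# Hypothetical Containerfile for the Java app in the flow above.
# Assumes Maven has already produced target/customer-engagement.jar.
FROM registry.access.redhat.com/ubi8/openjdk-17:latest
COPY target/customer-engagement.jar /deployments/app.jar
EXPOSE 8080
CMD ["java", "-jar", "/deployments/app.jar"]
```

With a file like this, the build-and-push stage would be roughly `podman build -t quay.io/<org>/customer-engagement:1.0 .` followed by `podman push` of the same tag (organization and tag are placeholders), after which the OpenShift Deployment references that image.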

Telecom Industry | Red Hat Openshift | Interview Prep Focus Area Read More »

Hubbl | Comcast | Entertainment OS | Work Life in Comcast

Vanakkam all

Hubbl is trending news in Australia now: a new device from Foxtel to manage your streaming apps. The reason I am happy to read about this is that Hubbl is powered by Comcast's Entertainment OS. Being an ex-Comcast employee, I am glad to read about the products and projects being launched by Comcast and its partners every now and then.

Hubbl:
I was just browsing through Hubbl, and every media outlet and newsletter is talking about the device and its launch this week. Hubbl is based on Comcast's Xumo Play streaming platform, and Foxtel, as a syndication partner, utilizes Comcast's technology to offer its streaming service. Hubbl revealed its 18 app partners, including Netflix, Disney+, Prime Video, YouTube, Apple TV+, and Paramount+.

Two different devices: Hubbl comes in two forms:
1. The Hubbl puck (like a Jio device)
2. Built into the Hubbl Glass TV

Patrick Delany, CEO, Hubbl and Foxtel Group: "Hubbl is like nothing in the market – 'it is TV and streaming made easy' – seamlessly integrating world-leading technology with a purpose-built design and unrivaled app integration that sets it well ahead of the curve. It has been built with Australian consumers in mind, effortlessly fusing free and paid entertainment and sport from apps, channels and the internet into one seamless user experience – delivered via Hubbl Hub or a world leading TV, Hubbl Glass. It will deliver a frictionless paid and free entertainment environment, and we believe will become the heart of the home for millions of Australians."

Comcast's Entertainment OS:
Tech listeners: when we think about an OS, we usually think of the different Linux distros, Windows, mobile OSs, etc. Have you heard of an Entertainment OS? Comcast's Entertainment OS is the foundation of Comcast's and Sky's experiences for viewers. It is a next-generation customer experience (CX) that integrates seamlessly across various devices.
Entertainment OS aims to unify content from live TV, on-demand, streaming services, and other digital media into a single, easy-to-navigate interface. It includes features like:
- A new way to manage playlists
- A faster way to restart TV programs or films
- A new "continue watching" rail for Netflix content
- Personal playlists
- A new "Play" voice command
- More ways to find content from your favorite actors and directors
- Enhanced Bluetooth features

Life in Comcast – CIEC | ex-Comcast employee:
Being an ex-Comcast employee, it feels good to see new products getting released and Comcast crossing several milestones. I learned from friends and LinkedIn posts that hiring is in full swing and new floors in ChennaiOne are getting ready to accommodate new hires. I enjoyed each and every moment at Comcast for the duration I worked there. In my overall 17-year tenure, I felt truly bonded to the company:
- Work-life balance
- Employee centric
- Individual care and attention
- Supportive leaders who listen, guide, and mentor
- Meet the company GM directly to discuss your concerns and share your opinions
- Good salary package
- Women empowerment | DE&I in effect
- Job roles matched to your skills
- Internal job postings

All of the above are not just namesake features; they are the real perks you enjoy as a Comcast employee. Job seekers: find the openings on LinkedIn and give Comcast, Chennai a try.

CubenSquare – opposite ChennaiOne:
My office, CubenSquare, is opposite ChennaiOne, where the Comcast office is located. Every day I pass by ChennaiOne to reach my office, which brings back old memories.

What about AI in the entertainment industry? Let's explore that in upcoming blogs. I am also wondering whether OpenAI, Gemini, or another player will release a product around OTT, live streaming, etc. As we speak, OpenAI's image generation and Sora, its text-to-video model, are amazing. And what about job roles in Adobe Photoshop, movie editing, etc.? Will there be layoffs?
Will small players consider OpenAI instead of hiring a video engineer? We need to wait and find out. Adapt to the changes, think big, and plan ahead.

Hubbl | Comcast | Entertainment OS | Work Life in Comcast Read More »

AI and its impact on copyright law | OpenAI – SORA

Vanakkam all

The talk of the world: AI. Each one of us is amazed and thrilled to see the magic AI can do, how it can be used in our day-to-day lives, and how it can be adopted, while at the same time worried about privacy, security, and how it might impact human jobs. From an employer's perspective: how do we make sure confidential company information is not fed into machine learning by employees? AI products nowadays have a way to enable or disable 'train the machine'.

AI's impact on copyright:
While I enjoy the day-to-day news on AI and its results, I am also keen to know about its impact on copyright law. Copyright is about the legal rights of the owner of intellectual property: whether the owner has given consent to copy his creation. We await the enactment of the AI Act in the EU, which has already been proposed and has received a positive nod on implementation from 27 countries.

Differentiating human creation vs. AI creation:
We do not yet have a technology to determine the origin of a creation, be it a picture, story, or music. OpenAI is currently working on internal testing to identify the origin; we need to wait for it.

OpenAI – copyright complaints:
OpenAI states: "If you believe that your intellectual property rights have been infringed, please send notice to the address below or fill out this form. We may delete or disable Content alleged to be infringing and may terminate accounts of repeat infringers." While OpenAI's research and AI products are at their peak, it is the responsibility of the individuals who use the products to use them safely and responsibly.

OpenAI Sora:
OpenAI's announcement of Sora: what an announcement. Sora creates video based on your text input: stunning videos up to one minute long, maintaining visual quality and adherence to the user's prompt.

Safety & risk analysis:
Sora is available to red teamers to assess areas of harm or risk. I am awaiting the chance to use the different emerging AI tools and analyze the risks associated with them.

AI and its impact on copyright law | OpenAI – SORA Read More »

Signed our next project and unveiling our new website

Vanakkam all

Yesterday, we were able to bag our next project: a vessel management software solution. Multiple meetings, proposals, demos, etc. helped us land this huge project. Much more to come on this topic; we will pause here while we work on the product.

AND – https://cubensquare.com Exciting times ahead as we proudly unveil our new website. Join us on this new journey, where we explore and provide solutions to your IT problems.

PROPOSAL IN LINE: After a successful demonstration of our 'End to End Jira Software and Service Management solution' to a fintech company, we are awaiting the final signoff on the proposal. Fingers crossed.

Signed our next project and unveiling our new website Read More »

Rolling update – Kubernetes

Vanakkam all

This post is about rolling update commands in Kubernetes and the drawbacks to be evaluated before using this deployment strategy.

Kubernetes: a rolling update is one of the deployment strategies used in Kubernetes when you need to update your application to a new version without causing downtime. "Without causing downtime" is the key for business users.

Rolling update scenarios:
- Zero-downtime application deployments
- Software updates
- Configuration changes: environment variables, resource limits
- Scaling changes: replicas, resource allocations

Rolling update implementation: can be done either through the 'kubectl' command line or a YAML file.

Application deployment: before performing a rolling update, let's create a deployment:

kubectl create deployment mydeployment --image=nginx:1.15 --replicas=2
kubectl get deployment
kubectl get pods

Option 1 – update the application using the YAML file method:

vi mydeployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mydeployment
  template:
    metadata:
      labels:
        app: mydeployment
    spec:
      containers:
        - name: nginx-container
          image: nginx:1.16

kubectl apply -f mydeployment.yaml

Option 2 – update the application using the 'kubectl' CLI:

kubectl get deployment
kubectl set image --help
kubectl set image deployment/mydeployment nginx-container=docker.io/nginx:1.16

Monitoring the rolling update:

kubectl rollout status deployment/mydeployment

Practical drawbacks: the rolling update is one of the great features of Kubernetes, as we can update applications without causing downtime.
At the same time, one must be mindful of the situation and environment when choosing the rolling update strategy, as it has drawbacks too:
- Increased deployment time: larger deployments take more time to complete compared to the blue-green deployment strategy
- Increased resource consumption: during a rolling update, the old and new versions coexist for a time, and during that window resource consumption is higher
- Stateful applications: rolling updates may not guarantee zero downtime
- Dependencies: if your application has dependencies, choosing a rolling update needs careful consideration
- Incompatibility issues: in some cases, a rolling update might expose incompatibilities between the older and newer versions
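The resource-consumption drawback above can be tuned through the Deployment's update strategy. The sketch below reuses the same hypothetical mydeployment and caps how many extra pods may exist during the rollout:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one pod above the desired count during the update
      maxUnavailable: 0  # never drop below the desired count (zero-downtime bias)
  selector:
    matchLabels:
      app: mydeployment
  template:
    metadata:
      labels:
        app: mydeployment
    spec:
      containers:
        - name: nginx-container
          image: nginx:1.16
```

A lower maxSurge reduces the peak resource usage of old-plus-new pods at the cost of a slower rollout, while a nonzero maxUnavailable speeds the rollout at the cost of temporarily reduced capacity; pick the trade-off per environment.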

Rolling update – Kubernetes Read More »

AI Robot & Dentistry | My interview with Dr. Mohammed Ashiq, Tlieta Dental House

Vanakkam all

While all of us are exploring AI in different ways, I thought of understanding and experiencing AI's impact in healthcare. Today I got a chance to sneak a peek into 'TLIETA DENTAL HOUSE'. Dr. Mohammed Ashiq, chief dentist, shared his insights on the intersection of dentistry and AI. Artificial intelligence is making strides in revolutionizing dental practices, offering new possibilities for diagnostics, treatment planning, and patient care.

The doctor's passion for embracing technology:
Despite his busy schedule, he took some time to give a brief on the 'AI Robot' and the importance of doctor consultation.

AI Robot machine: how it works:
He has an 'AI Robot' machine in his front office: 6 feet tall and 4 feet wide, with a touch screen and a movable camera attached to the device. The doctor asked me to take a demo:
- Enter your phone number and name
- Click on Start
- The screen displays a stencil view of how to show your teeth
- Open your mouth and show your teeth in front of the camera: guided phases scan the upper, lower, and front of the teeth
- Once done, the screen displays a preliminary diagnostic report with possible treatment options
- A copy of the report is sent to your mobile as well

AI in dental practice:
Dr. Mohammed talked about the significance of the AI Robot, which aids in the early detection of oral health issues. Just stand in front of the machine, scan your teeth as guided by the machine, and in a few seconds you get the report. The report gives details with treatment options, but always get a doctor's consultation for proper treatment.

Doctor consultation is mandatory:
AI helps with a quicker preliminary check and treatment options, but it cannot be considered final; consult the doctor with the report for further treatment. "Patients appreciate the personalized approach, and it instills confidence in the treatment process."

My view on AI in dentistry:
While we appreciate the holistic view, we still have to address the challenges and concerns below.
- Privacy
- Confidentiality of patient data
- Data security
- Ethical considerations

AI is a powerful tool, but it cannot replace the intuition and experience of a skilled dentist. Through AI, doctors can offload routine tasks and focus more on treatment. Soon enough we will find answers to all the queries above and put AI to better and more secure use. AI and human expertise work hand in hand, ultimately delivering superior oral healthcare.

AI Robot & Dentistry | My interview with Dr. Mohammed Ashiq, Tlieta Dental House Read More »

Life in CubenSquare as an Intern – DataScience AI/ML

Vanakkam all

While I was working in the corporate world, we used to interview, select, and train interns and create an environment for them to have an easy, smooth transition from college life to corporate life and real-time project execution. That experience is helping me welcome interns to CubenSquare, understand their expectations, define the roadmap for them, create an overall joyful learning experience, and help them excel in the domain.

Different stages:
- Welcome and introduction
- Orientation
- Existing skill evaluation and feedback
- Setting expectations and goals
- Communication channels
- Training
- Project topic discussion and finalization
- Mentorship and guidance
- Project execution and progress check-ins
- Tracking progress through tools
- Providing necessary resources
- Discussing professional development opportunities
- Presentation and documentation by interns
- Project completion and demo
- Feedback mechanism
- Potential next steps
- Joining support channels to learn about job opportunities

Projects focused on different industries:
Currently we are focusing on the industries below, considering our potential customer base and upcoming contracts. At the same time, we welcome ideas and suggestions from students to discuss and proceed:
- IT
- Tourism
- Marine
- Finance

Tool selection:
I would say the choice of tools often depends on the specific requirements of a project, the preferences of the data scientist or analyst, and the nature of the data being worked with. We have several tools to choose from but can narrow them down to the requirements.

Our choice of project – AI-enabled tourism:
Tourism is a popular topic being discussed across India due to various historic events happening currently. So we took this up as one of the projects and are currently working on it.
Features to implement:
- Mobile application
- User profiling
- Recommendation engine
- Real-time updates
- Multi-criteria optimization
- Sentiment analysis

Technologies:
- ML
- NLP
- Real-time data integration: APIs
- Mobile app development: React, Flutter

Invitation to recruiters: a request to all recruiters: if you have a requirement in this domain, do give our interns an opportunity and evaluate their skills. We assure you of skilled resources and talent.

Life in CubenSquare as an Intern – DataScience AI/ML Read More »

‘AI in Finance’ session at Women’s Christian College

Vanakkam all

I have been to multiple colleges across Tamil Nadu and other states to deliver technical sessions on current IT trends, the job market, etc. This January, I was invited to Women's Christian College, Chennai to deliver a session on AI in the finance industry. Audience: commerce graduates. While at the college, I understood that irrespective of department, be it finance or civil, all students are focused on their core area while also keeping an eye on the evolution of information technology.

AI because of us:
While I was working in the corporate world: inside the company, whenever a new technology emerged and was implemented, the whole team's discussion would be around whether we would lose our jobs to automation, AI, etc. Outside the company, as consumers, we get frustrated by delays: getting a bank statement, a report, or loan approval; government tasks like voter ID, ration card, and Aadhaar; queues at the theatre; queues at the restaurant; and so on. For everything, we get frustrated and pray for a quicker solution. Our frustration is a business opportunity for someone else. That's where all of this started. Continuous improvement, innovation, artificial intelligence, machine learning, etc. are leading towards 'one click' for our needs, be it food delivery, bank transactions, movie booking, flight booking, or government tasks. Everything at your fingertips.

AI in finance:
AI is changing the whole system, and speed is the key. AI is changing the quality of products and services the banking industry offers:
- Cost-effective solutions
- Chatbots
- Fraud detection & prevention
- Customer relationship management
- Credit risk assessment
- Predictive analytics
- Regulatory requirements
- Competition

Red Hat OpenShift Data Science:
Red Hat OpenShift Data Science is now named Red Hat OpenShift AI. It is a scalable machine learning platform with tools to build, deploy, and manage AI-enabled applications. Try Red Hat OpenShift AI in your environment and check its pace.

‘AI in Finance’ session at Women’s Christian College Read More »

No AI plz | Enjoy the festivals, traditions with human touch | And the business

Vanakkam all

I am from a small village, 'Kottaram', less than 5 minutes from Kanyakumari, the tip of India. Pongal, a multi-day harvest festival, is celebrated every year in the month of January. My family and I travel to our native village every year to celebrate Pongal in the traditional way. One of the major and most enjoyable tasks for the ladies is drawing kolam (lines, curves, and loops drawn around a grid pattern of dots) the day before Pongal. The whole village draws kolam in front of their houses; they start by 10 pm and go on until approximately 2 to 3 am. No technology can match this experience of drawing through the night: an enjoyable experience, slow and steady effort to make the kolam colorful, no awards for the best kolam, just wonderful compliments from neighbors and family members for the effort and the drawing.

Artificial intelligence is all about reducing time and making the customer experience better. But while it is possible to bring AI everywhere, let's not bring AI into this and the several other festivals and traditions we follow. Let's leave them as they are, pass them on to our next generations, and let them enjoy what we did. This is not just about the Pongal festival; it applies to all celebrations where the human touch matters. My ex-boss from Comcast, Ernie Biancarelli, often quotes this: 'Leave it better than you found it.'

Happy Pongal to all of you. I am writing this at 2:45 am IST, waiting for the kolam to be completed. Started by 10 pm IST and still ON. Slow – steady – enjoy – leg pain – cool breeze – coffee – colourful kolam 🎉

And the business: Meanwhile, I got a chance to meet one of my college mates in my native village, who had come to India the day before. He wants me/CubenSquare to build a stock market application involving AI and ML. A sugarcane stick in one hand and an AI/ML discussion in the other. We have initiated the discussion, and by the middle of next month, we should get started on developing the application.
I will share my experience of the AI/ML implementation.

No AI plz | Enjoy the festivals, traditions with human touch | And the business Read More »

Redhat Openshift Interview Questions – The 10 : Part 2

Vanakkam all,

In this series, I am sharing the interview questions asked of our students at various MNCs and startup, product-based, and service-based companies. The second 10 questions are below; the questions will get more complex, with scenarios, as we move through the parts.

OpenShift Interview Questions – The Ten: Part 2
1. What aspects should be considered while upgrading OpenShift (e.g., known issues)?
2. Monitoring & alerting: have you used native features or external tools? Brief us.
3. Do we need Grafana? If so, how many dashboards do you have in your environment? Detail them.
4. What identity provider is implemented in your environment? What is your role in it?
5. What is RBAC? How do you get onboarding requests, and who manages them?
6. Difference between namespace and project: have you used projects in OpenShift?
7. Why did your client choose OpenShift over Kubernetes?
8. Autoscaling is not working and the pods are not getting created: what will you do?
9. A pod or application instance is in pending status: what are the possible errors?
10. Alertmanager is not sending notifications to Slack: what might be the issue?
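For the autoscaling question in the list above, one common root cause worth knowing is missing resource requests: a CPU-based HPA cannot compute utilization if the target pods do not set resources.requests.cpu. A minimal sketch of a correctly formed HPA (all names hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp   # this Deployment's pods must declare resources.requests.cpu
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

When diagnosing, describing the HPA (`oc describe hpa myapp-hpa`) shows whether the metrics are being read at all, which separates a metrics problem from a scheduling problem.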

Redhat Openshift Interview Questions – The 10 : Part 2 Read More »

My first interview with a lorry driver : Hit & Run law

Vanakkam all,

This is my first direct interview: non-technical | law. IT engineers should also be aware of this: the punishment and fine for causing death by accident.

Sharing my experience: Yesterday I went to Tirupati. I usually stop at a small Rajasthani dhaba near Tirupati for food: roti, bhindi fry, baingan fry, channa, daal tadka, mushroom masala, mushroom rice, etc., served hot, spicy, and tasty. I saw a few trucks parked with the drivers standing beside them, and I thought of checking on the recent hit-and-run law and its impact from a lorry driver's point of view.

Hit & Run Law:
One of the hot topics being discussed in India is the hit-and-run law in the move from the IPC to the BNS, which states that "whoever causes the death of any person by rash and negligent driving of vehicle not amounting to culpable homicide, and escapes without reporting it to a police officer or a magistrate soon after the incident, shall be punished with imprisonment of either description of a term which may extend to ten years, and shall also be liable to fine".

Point to note | illustration: Assume that 'A' causes the death of 'B' by rash and negligent driving. If, after the accident, 'A' flees the scene without reporting to the police or a magistrate, the punishment will be 10 years' imprisonment and a 7 lakh fine.

Intention behind the new law: the act aims to improve road safety in the country.

Lorry driver's view:
I asked about their perspective and why they are opposing the new law. He told me:
- Driving a lorry with heavy loads is not easy; drivers need to watch all corners all the time
- People cross the road behind the lorry and, due to negligence, get hit at the tail
- A few pelt stones at the mirror to forcefully stop the lorry for robbery
- Bribes
- In case of a death due to an accident, drivers can't stay at the scene, as there is a high chance of the public getting agitated and causing grievous hurt to the driver
- Lorry drivers' salaries are very low, including the daily bata.
In this situation, a fine of 7 lakhs is practically impossible for drivers to settle.

Summary: The new law is strict and improves road safety; drivers will have no choice but to call the police to report the accident and try to save the victim. The punishment is on the higher side, and we need to wait until the parliamentary decision is made.

My first interview with a lorry driver : Hit & Run law Read More »

Redhat Courses

Vanakkam all

We have a few seats left with year-end offers: pay for the Red Hat exam only and get free training and lab.

What is Red Hat Linux: a Linux distribution (operating system) developed by Red Hat
- Course code: RH199
- Exam code: EX200 [2 attempts]
- Hands-on practice: Red Hat lab

What is Red Hat Ansible: an automation platform, infrastructure as code
- Course code: RH294
- Exam code: EX294 [2 attempts]
- Hands-on practice: Red Hat lab

What is Red Hat OpenShift: provides an enterprise-ready Kubernetes environment for building, deploying, and managing container-based applications
- Course code: DO280
- Exam code: EX280 [2 attempts]
- Hands-on practice: Red Hat lab

Redhat Courses Read More »

Redhat Openshift Interview Questions – The 10 : Part 1

In this series, I am sharing the interview questions asked of our students at various MNCs and startup, product-based, and service-based companies. The first 10 questions are below; the questions will get more complex, with scenarios, as we move through the parts.

OpenShift Interview Questions – The Ten: Part 1
1. An application fails to deploy: what is your next step?
2. An application encounters performance issues in an OpenShift environment: what will you do to fix it?
3. What aspects ensure high availability?
4. Difference between high availability and consistency
5. How do you arrive at resource allocation for your environment?
6. Your experience in optimizing an OpenShift environment
7. Who are your stakeholders?
8. What is a network policy? Give an example of implementing it in your environment
9. Is there an inbuilt CI/CD feature in OpenShift? If so, have you implemented it? Brief us.
10. What are the automation possibilities in OpenShift? Share your experience or knowledge

Redhat Openshift Interview Questions – The 10 : Part 1 Read More »

How to calculate number of nodes, cpu, memory, core required for my Redhat Openshift Cluster

Vanakkam all. Often the questions from students are about how we calculate node capacity, the resources to be reserved, the number of vCPUs, etc. for an environment. To start with – irrespective of the tool, be it Redhat Openshift or middleware or Kubernetes or a database – all estimations are based on the number of applications we deploy, the size of those applications, and the usage of the applications. Initially, architects work along with product owners to understand the client requirement, application usage, and the estimated number of users who would be accessing the application. Then the developers come in, to understand the JVM usage, load testing, framework, and average footprint of the applications. Based on these initial discussions, the estimation starts. Factors to be considered: how many pods are to be deployed; the application framework; the historical load of the applications; the average memory footprint of the applications; per-node memory capacity and number of vCPUs; and reservation capacity for autoscaling. To summarize: with respect to Redhat Openshift, the important detail in estimating the size is how many pods have to be running for application availability and resiliency. The picture above explains the rest.
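The factors above can be turned into a rough back-of-the-envelope calculation. The sketch below is purely illustrative (the function name, reserve fractions, and example numbers are my assumptions, not a Red Hat sizing formula): it takes the average pod footprint, holds back a slice of each node for system overhead plus autoscaling headroom, and returns the worker-node count driven by whichever resource runs out first.

```python
import math

def estimate_worker_nodes(num_pods, avg_pod_mem_gib, avg_pod_cpu,
                          node_mem_gib, node_vcpus,
                          system_reserve=0.25, autoscale_headroom=0.20):
    """Rough worker-node estimate from an average pod footprint.

    system_reserve: fraction of each node held back for OS/kubelet/system pods.
    autoscale_headroom: extra capacity kept free for scaling spikes.
    """
    usable_mem = node_mem_gib * (1 - system_reserve)
    usable_cpu = node_vcpus * (1 - system_reserve)
    total_mem = num_pods * avg_pod_mem_gib * (1 + autoscale_headroom)
    total_cpu = num_pods * avg_pod_cpu * (1 + autoscale_headroom)
    # Nodes needed is driven by whichever resource runs out first.
    return max(math.ceil(total_mem / usable_mem),
               math.ceil(total_cpu / usable_cpu))

# e.g. 120 pods of 1 GiB / 0.25 vCPU each, on 32 GiB / 8 vCPU worker nodes
print(estimate_worker_nodes(120, 1.0, 0.25, 32, 8))  # → 6
```

Real sizing would also account for control-plane and infra nodes, pod-count limits per node, and failure-domain spread, which this toy calculation ignores.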

How to calculate number of nodes, cpu, memory, core required for my Redhat Openshift Cluster Read More »

AI : Atlassian Intelligence, My Mom’s Grocery list vs Companies tools list

Vanakkam all. Today Atlassian launched AI across their products like Jira Software, Confluence, Jira Service Management and more. While AI has existed for a long time, it became popular after the ChatGPT disruption. From top brands to small products, AI has blended in everywhere. Atlassian Intelligence: a partnership between humans and artificial intelligence (AI). Atlassian Intelligence is going to help reduce several hours of search effort and give you the answers on a silver platter. Nearly 10% of Atlassian's 265,000+ customers have already leveraged Atlassian Intelligence through the beta program. In short, the overall testimony is that Atlassian Intelligence boosts individual productivity: no need to spend minutes or hours reading, understanding and summarizing PIR reports; create tickets instantly, look up Q&A instantly, Slack integration, etc. Key features: human–AI collaboration; Generative AI Editor | create user stories instantly; AI-Powered Summaries | get up to speed on any topic in Confluence; natural-language automation; AI Definitions | demystify jargon, concepts or acronyms; natural language to JQL, SQL; Q&A search; Virtual Agents. AI Definitions: when I read about this feature, I remembered my colleague Bhavya from a previous company. During one of the lab weeks, she pitched an idea about demystifying the meanings of jargon, difficult words, and foreign languages as an on-screen, one-click display for OTT screenings: movies and series. To my leaders from previous organizations: Atlassian Intelligence is really going to help directors and senior leadership save time and effort. Just try this – schedule a meeting, invite everyone in the BU, share the screen and explore the AI possibilities in Atlassian – Jira Core, Jira Service Management, Confluence, etc. This will help you gather feedback and, mainly from the reactions in the room, decide how much time and effort you are going to save going forward.
Try these for sure. Virtual Agents: these can respond to help requests in Slack | Jira Service Management. Generative AI Editor: most of the time, leaders tend to track a few tasks discussed in a general meeting, but we miss doing so because we depend on a team member to create the ticket. Now, instantly create a ticket on your own by typing a few words on the go! AI-Powered Summaries: last but not least – there are several instances where we have to spend minutes or hours understanding a technical document before attending a technical meeting, where the architect/teams go through several pages of a Confluence document. Now, just 5 minutes before the meeting, open Confluence – Atlassian Intelligence helps summarize the whole doc in a nutshell. Read, understand, and shoot your questions in the meeting to clarify doubts and lead the direction of the call. My mom's grocery list vs AI tools list: today when I returned from office, my mom gave me the grocery list. Seeing this while reading about AI, I just felt how the future of IT companies will be – soon the procurement team will be ordering: 2 – AI-based Atlassian tools: Jira Core, Confluence; 2 – AI-based Redhat tools: Ansible, Openshift; 1 – AI-based communication tool: Slack; 3 – AI this; 4 – AI that. Time for all engineers to think out of the box: while AI is emerging, remember it is reading existing material (machine learning) and suggesting, summarizing, and giving solutions from within what already exists. As humans, let's all think better, think out of the box, and try implementing what we feel is impossible or not yet explored. Try all possible POCs (proofs of concept), but at the same time be time-conscious and mindful of the resources you have and the resources you are going to put to use. AI – suggest a baby name, please: AI has started playing a crucial role in everything, including picking a name for your newborn baby.
Soon you will type "suggest a name for my baby" and AI will analyze your browsing history, location, nature of work, cultural aspects, movies booked, favorite movie star, and most-used names, and it will suggest you a name – for instance, Rajinikanth 🙂 12-12-23: today is superstar Rajinikanth's birthday | Happy birthday to Mr. Thalaiva. AI in CubenSquare: we in CubenSquare believe AI means 'All Illama Mudiyathuda' (meaning: not possible without humans), and that's the front poster I have at my office entrance.

AI : Atlassian Intelligence, My Mom’s Grocery list vs Companies tools list Read More »

Michaung cyclone – Chennai : No Power, No Network, No communication

Vanakkam all. Like everyone in Chennai, I am also one of the victims of this cyclone – a person who couldn't go directly to support one of my friends and his team of 12 students who didn't have food at night on the day of the heavy rainfall; I couldn't go due to unpredicted water logging everywhere, road closures, and poor network. How long are we going to keep blaming the government and speak up only on social media? Let's come to reality, let's get down on the street, know the reality, and act and prepare accordingly. Prediction and failure: we predicted that there would be a cyclone, heavy rainfall, and impact, but we failed to predict: the amount of rainfall; how fast the rivers and lakes would fill; what the impact would be if the water were released at higher cusecs; which areas would get impacted – low-lying areas, apartments in low-lying areas, streets in low-lying areas; previous metrics; new construction areas; roads with improper patch work; proper care for cars during floods / how to avoid cars floating around like toys; transformers near our area; the number of workers in the EB office, so that we understand their situation too during impacts. AI/ML: practically speaking, AI/ML and other technologies didn't save people from the flood; rather, all these tools just helped to post the pictures, videos, metrics, and situation of Chennai. All these technologies are advancing due to consumer expectations and demand – be it the food industry, healthcare, telecommunications, manufacturing, government, retail and e-commerce, education, or finance. That's all good, but are we going to use these technologies to learn the impact, react, and give a solution?
IT engineers – don't blame the govt, let's blame ourselves: being IT engineers, all of us tend to move towards niche technologies, supporting giant companies, using AI/ML etc., but we all failed to predict and learn; we are still failing to understand what happened, how it happened, and what we can do to avoid it next time. This happened in 2015, and it happened again now in 2023 – same impact. We all discuss the impact on WhatsApp, Twitter, Insta etc., respond back with all sorts of emojis, but fail to take care of ourselves, our street, our known circle. War room: once you resume office, keep aside your regular tasks. Don't discuss contributing money or food to flood-affected people and areas – let that happen automatically, without discussion. Discuss why it happened, why we are facing this every year, the areas impacted, the streets impacted, which lake/river water was released and at what speed/cusecs, pictures of the impacted areas, transformers near houses, safety measures, what helped and what didn't, what an individual can contribute, etc. First help yourself, your family, your neighbor, your street, then a bigger radius. We are always thinking about the other end of the radius and its impact, and we forget our own surroundings. Be practical: I also don't know the answer. I am also a victim of this cyclone, and this happens every year to me as well. We cannot sell the property because we bought it in a low-lying area. We cannot park the car on the bridge when the cyclone is announced. Cloud, Devops, serverless, containers: super – prediction and preparedness during the announcement of the Queen's passing in September; prediction and preparedness during IPL; during the ICC World Cup; during the Football World Cup. We predicted well, prepared well, and used all possible technologies – serverless (Cloud Run, Lambda etc.), containers, Kubernetes, Redhat Openshift, scaling features.
We succeeded and celebrated our clients' success – which is good and required too, I don't deny it. But we failed when it impacted us. We predicted but didn't prepare for the worst. So what's next – summary – moral of the story: this time, I am not going to wait for any announcements, not going to keep scrolling social media for everyone's advice and suggestions, do this, do that, etc. I am just going to check, for my area: lakes/river details, capacity, etc.; the EB office; transformers nearby; hospitals nearby; generator/inverter backup, including the maintenance schedule; dry-fruit stock during a cyclone prediction; informing the corporation to clean up the street drainage system in advance; cutting down trees crossing the EB lines; filling up the water tanks during a cyclone prediction; creating a WhatsApp group with friends from low-lying areas to know the impact and help them in all possible ways (really missed this, this time); knocking on every neighbor's door and alerting them to all this. One last thing: sharing all this with neighbors and known circles, so they can share it with others if they find it useful. Let's all work together to bring 'Namma Chennai' back to its feet and make it 'Singara Chennai' as always, irrespective of cyclones and floods. Ethu enga ooru, enga Chennai!! Vanakkam

Michaung cyclone – Chennai : No Power, No Network, No communication Read More »

Redhat Openshift on AWS, Autoscaling & Akshya Patra

Autoscaling: be it the announcement of the Queen's passing on the BBC or live-streamed sports – autoscaling is one of the rich features that keeps a website or app stable through a rapid increase in viewers. A classic example of autoscaling is live-streamed sports. The recent World Cup cricket and football matches were watched by viewers all over the world; depending on the score, the goals, and the teams' histories, the number of viewers keeps changing. IPL: during the last IPL, it was evident on the live-streaming screen that when Dhoni came in to bat, the number of viewers would drastically increase, and when he was out, it would go back to the nominal viewer count. These fluctuations were handled by autoscaling and containers. Queen's passing announcement: the BBC relied upon autoscaling – "Around the time of the announcement of the Queen's passing in September, we saw some huge traffic spikes. During the largest, within one minute, we went from running 150–200 container instances to over 1000… and the infrastructure just worked." My personal experience with pre-scaling instead of autoscaling: for one event, the client projected that the number of viewers would spike instantly. To handle such instant spikes, the Kubernetes version we ran (v1.11) didn't have an option to configure autoscaling to spin up and scale new pods within a short time frame – the pods wouldn't scale up fast enough to handle the viewer spike. Hence, we had to keep aside the autoscaling feature and pre-scale the pods before the event. ROSA: Redhat Openshift Service on AWS. With ROSA you never need to worry about the underlying platform or the complexity of infrastructure management – those are handled by the Redhat and AWS SRE teams.
You just focus on delivering value to customers by building and deploying applications. Like other container orchestration tools, ROSA features autoscaling in two aspects: the Horizontal Pod Autoscaler, which automatically scales pods up/down, and the cluster autoscaler, which automatically scales nodes up/down. In ROSA, cluster autoscaling is set per machine pool definition. To add autoscaling to a machine pool, run the following command: rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num> Akshaya Patra: you might wonder what Akshaya Patra has to do with all this. Whenever I discuss autoscaling, I correlate it with the Akshaya Patra – a legendary copper vessel featured in the epic Mahabharata, a divine vessel which offered a never-depleting supply of food to the Pandavas every day. In a similar way, today's sophisticated tools – ROSA, Kubernetes, cloud providers – offer a never-ending supply of resources to the on-demand, unpredictable number of viewers accessing online apps across the world. Summary: no more manual scaling. Not having to manually manage the scale of major components of the stack frees up time that can be utilized in other areas. For environments with frequent, unpredictable large traffic spikes, make use of autoscaling features efficiently to give your customers a reliable user experience, and at the same time keep a tab on pricing too.
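The Horizontal Pod Autoscaler mentioned above picks the replica count from the observed metric; Kubernetes documents the core rule as desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal sketch of that rule (the function name is mine, and the min/max clamping stands in for the HPA's configured replica bounds):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    """Core HPA scaling rule: scale proportionally to metric pressure,
    then clamp to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(desired, max_replicas))

# 4 pods at 90% average CPU against a 60% target -> scale out to 6
print(hpa_desired_replicas(4, 90, 60))  # → 6
```

The real controller adds tolerances, stabilization windows, and readiness handling on top of this, which is why brief metric blips don't immediately flap the replica count.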

Redhat Openshift on AWS, Autoscaling & Akshya Patra Read More »

Reason for Migration from Opensource Kubernetes to Red Hat Openshift – My Client experience

Vanakkam all. The Red Hat Openshift product is doing exceptionally well in the market. Different industries have started adopting Openshift alongside opensource Kubernetes solutions – finance, telco, healthcare, entertainment, etc. We also recently placed a few of our students in the Middle East in Red Hat Openshift jobs. Regarding migration from Kubernetes to Openshift, I would like to share one of my recent client experiences. I got a chance to provide consultation for a healthcare firm. Kubernetes runs on both on-prem and cloud environments. It's a 7-member team, including the lead, from India, plus 2 resources from the US. A few of the Kubernetes day-to-day tasks handled by them: Kubernetes cluster installation; cluster upgrades; deploying applications; Helm chart creation; namespace resource quota and limit range management; setting up monitoring and alerting; configuring PVs and PVCs as appropriate; security and access control. Common issues and tickets raised by developers and end users: resource management – CPU, memory, storage bottlenecks; application performance issues; networking issues – external sources, network policy errors, DNS resolution issues; volume mount issues – persistent storage; cluster upgrade issues; rollbacks; unavailability of proper documentation. Practical issues: senior-resource dependency. While my client has experts and architects in Kubernetes, retaining the experts in the organization is a practical challenge, and no single senior resource can know the solution to every issue. During migration from traditional infrastructure to containerization, every organization goes through several processes – analysis, fine-tuning, a sophisticated architecture, and tools adopted for that environment and requirement. Relying on team experts in case of any downtime, a lack of proper documentation (which leads to a knowledge gap on the environment and related tools), and attrition of senior resources are all contributing factors.
New members of the team will have limited knowledge and skills to troubleshoot Kubernetes issues, yet the entire account relies upon their skills to solve an issue – the downtime might be prolonged until the right resource fixes it. A quote from my friend: "When an engineering team introduces a new architecture or tool, it should always consider how operations will support it once it hits production." Engineering is not about designing or introducing a new tool just to satisfy your technology curiosity or to broadcast a change in your environment; rather, understand the problem statement properly and then go for what benefits the org and the customer. The reason for stating this is that, most of the time, a team ends up creating a sophisticated architecture with many tools but fails to handle the issue when there is a downtime – lack of skill set or missing the right resources. Kubernetes CNI, DNS, network issues: the team faced several issues with Kubernetes networking. Issues were reported on CNI and DNS which resulted in application downtime. The team, with limited knowledge and limited documentation, was unable to handle the issue, and it ran for more than 72 hours across all environments. Due to opensource limitations, there was no product-level support. The decision to migrate from Kubernetes: after all possible discussions on a technical way out, leadership decided to switch from Kubernetes to Red Hat Openshift. Resource dependency gets minimized: Redhat features include enterprise support, the Openshift web console, easy navigation, easy updates, operators, monitoring, etc., with proper documentation. By migrating to Red Hat Openshift, even when the senior resources or the team struggle to fix a problem, we can always raise a Redhat support ticket and rely on Red Hat support too. The ops team can also rely upon the detailed Red Hat documentation.

Reason for Migration from Opensource Kubernetes to Red Hat Openshift – My Client experience Read More »

Project Signed – African client : Ecommerce – Setup & Application Support

Vanakkam all. The first two projects and their progression helped bag the next one – ecommerce setup and application support. During my corporate days, we used to have a sales team and a project team to market the product and service and then decide on the deals. The sales team is given a target to sell the product or service, and they meet the expectation by successfully bagging deals. The next part of the story is with the project team. At times, the project team would refuse to take up a deal for various practical reasons: team bandwidth, skill set, number of resources, tools and technology availability, a complex ask from the customer, etc. Finally, the leader common to both the sales and project teams gets looped in; his conclusion, irrespective of the pointers and discussion from the project team, is "Let's go for it and meet our customer's expectation." Both teams are right from their own angle, as I have been in both their shoes before. And the good part is, they do deliver the expected results. Being a startup, I often play both roles now, but I finally conclude by bagging the project and making sure the team is ready to deliver. At the same time, thanks to all the friends who are helping me throughout this journey.

Project Signed – African client : Ecommerce – Setup & Application Support Read More »

Red Hat Openshift Admin Day to Day Activities-Part III

Troubleshooting user queries: the Openshift admin role involves spending most of the time on user queries and addressing alerts. Often users ping in Slack channels and report issues along with the errors, or we get alerts through a Slack channel; the resources on that shift are responsible for addressing them. There are several categories under this topic – I will talk about a few.

Pod issues, and what we do to address them:
1. Pod fails to start: check logs with oc logs <pod-name> to identify errors or exceptions. Resource constraints: ensure the pod's resource requests and limits are appropriate. Image availability: check that the correct image name is provided, as in the registry. Security context: check the SCC (security context constraints).
2. Pod CrashLoopBackOff: investigate whether the application inside the pod is crashing and restarting repeatedly. oc logs <pod-name>: check logs for crash reports. Resource constraints: insufficient resources might cause the pod to be terminated and restarted repeatedly – Openshift restarts the pod to see whether it can get the required memory, and if it still cannot after a few restarts, the pod keeps crashing.
3. Networking issues: service connectivity – check the NetworkPolicy (allow/deny network connectivity pod-to-pod, pod-to-service, and for external access).
4. Volume mounting problems: mount failures will result in pod failures. Permissions: check file and directory permissions if the pod has issues writing to mounted volumes. Infra changes: at times the infra team making changes on the FS, or restarting the OS with an improper FS access mode, can also result in these issues.
5. OOMKilled (out of memory): check the resources being utilized and still available.
6. Image pull errors: this happens if the private-registry authentication is incorrect. Image availability: verify the image repository, tag, and digest – images might have been removed or be unavailable.
7. Node issues: node failures can also affect pod status – check with the infra team on the health of the nodes.

Logging: Openshift provides extensive logging capabilities for monitoring and troubleshooting. In Kubernetes we do not get these as native options; rather, we need to go for external options.
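The CrashLoopBackOff behaviour in item 2 follows an exponential back-off between restarts; the Kubernetes documentation describes it as starting at 10s, doubling per crash, and capping at five minutes. A small sketch of that delay schedule (a simplification – the kubelet also resets the timer after a container runs cleanly for a while):

```python
def crashloop_backoff_delays(restarts, base=10, cap=300):
    """Delay (in seconds) before each of the first `restarts` restarts:
    10s, 20s, 40s, ... capped at 300s (5 minutes)."""
    return [min(base * 2**i, cap) for i in range(restarts)]

print(crashloop_backoff_delays(7))  # → [10, 20, 40, 80, 160, 300, 300]
```

This is why a pod that keeps failing appears to "settle" at one restart attempt roughly every five minutes rather than restarting continuously.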

Red Hat Openshift Admin Day to Day Activities-Part III Read More »

Red Hat Openshift Admin Day to Day activities : Part-II [Banking Domain]

The article 'Openshift admin day to day activities : Part-I' ( https://www.linkedin.com/posts/activity-7121099149671895040-BJwQ?utm_source=share&utm_medium=member_desktop ) covered the activities of a student working in the UK. This Part II is from a student/professional working in the banking domain – 8 years of overall experience, of which about 3.5 years are relevant – an L3 admin. At a high level, he gets the below requests through Jira as user stories: deployment failures; certificate management; troubleshooting user queries around k8s objects; ingress traffic; egress connectivity; managing multi-cluster upgrades; managing clusters running in 2 datacenters; supporting private/public cloud environments; service mesh upgrades; logging queries; monitoring enablement and queries; production outage troubleshooting; incident/change implementation, Jira tasks, and image-vulnerability fixes. Considering the experience level, the number of issues and tasks assigned to the individual is high. I have detailed a couple of the tasks in this article; to avoid lengthy pages, I will split the rest across upcoming articles.

Deployment failures: similar to Kubernetes or any other middleware technology, Openshift deployment failures can occur for various reasons, ranging from issues in your application code to problems with configurations, resources, or the Openshift platform itself. Often the developers and the engineering team get into a discussion pointing at each other over the root cause. A few common checks are outlined below:
1. Check application logs: oc logs <pod-name>; oc get events; oc describe pod <pod-name>
2. Resource constraints: ensure pods have sufficient resources (CPU, memory) allocated.
3. Image pull issues: check for network issues between the cluster and the registry; verify that the container images specified in your deployment configuration exist; check image names, repositories, and authentication requirements.
4. Network policies: oc get networkpolicy; oc describe networkpolicy -o yaml. Network policies allow or deny requests to pods – check whether one restricts communication between pods and services, and whether the pods are allowed to communicate with other endpoints.
5. Environment variables: check configuration files and secret references – incorrect references may lead to failures.
6. Service endpoints and ports: oc describe service <service-name>; check the endpoints (pod IP addresses) and that the required ports are exposed.
7. Volume mounts and persistent storage: oc describe deployment <deployment-name>; check the volume details and volume mounts.
8. Openshift cluster health: oc get nodes; oc describe node <node-name>; oc adm node-logs -u kubelet <node-name>; check the kubelet and crio service status; check the inbuilt Openshift dashboard.

Certificate management: there are multiple certificates in Openshift that ensure secure communication between components – certificates for the API server, etcd, router, registry, metrics, console, and kubelet all have to be managed. These certificate renewals and rotations are among the key tasks; it is better to automate the renewal checks through a customized script.
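As a toy example of such a customized script (purely illustrative – actual certificate rotation in Openshift is handled by the platform operators), a check that parses the notAfter date printed by `openssl x509 -enddate -noout` and reports the days left before expiry:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Parse an openssl-style notAfter string, e.g. 'Jun  1 12:00:00 2026 GMT',
    and return the whole days remaining until expiry."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days

# alert if a certificate has fewer than 30 days left (sample date is made up)
sample = "Jun  1 12:00:00 2026 GMT"
if days_until_expiry(sample) < 30:
    print("renew soon:", sample)
```

A real script would loop over the cluster's certificates, feed each through openssl (or the Kubernetes API), and raise an alert into the team's Slack channel instead of printing.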

Red Hat Openshift Admin Day to Day activities : Part-II [Banking Domain] Read More »

People Manager

As a people manager, do not always look up to Elon Musk, Mark Zuckerberg, or other global leaders' comments; rather, understand YOUR leaders' strategies and way forward. Recent news across the IT industry, and from my friends and students overseas: companies are getting rid of middle managers/people managers while at the same time hiring engineers and technical managers. Now the question: is it true? In my experience, I would say this has been the case for a long time; now it is becoming visible and transparent across the globe. The people manager's view: we focus on managing and developing the employees – their development plans, career growth, job satisfaction, mentoring, appraisals, feedback, resolving conflicts – and mainly on keeping the team together in an environment where people enjoy the work and meet the goal. We also have strong communication and interpersonal skills. 80% people management, 20% technical. My opinion: it is time for people managers to focus on technical aspects, learn niche technologies, and take up projects that are 80% technical and 20% people management. This will help you survive in the IT world and boost your confidence, rather than often thinking "we feel undervalued by our own team and leaders, underpaid, not on the promotion list, etc." During my tenure in corporate, the cafeteria discussion among people managers was often: "leadership is not recognizing my efforts, no promotion, no hike; the focus is on engineers up through architects, but not on us who manage the team." This applies to both service-based and product-based companies. Instead of spending time thinking about all this, looking at global leaders' comments, and wasting time swiping through social media, focus on your technical skills, understand your leaders' strategies, and travel with them to achieve the goal together. "If you are in constant complaining mode about leadership, then better to RESIGN and find a company which values your people-management skills." But understand the long-term growth and then decide on your path.
The firing mode is ON in the US, and it won't take much time to reach India. In simple terms: leaders give 'feather in your cap' mails and 'Best Manager of the Month' awards to people managers, and pay hikes and promotions to technical managers. Go see for yourself today! Be practical and add value to yourself and the business. 100% you can. Go technical – enjoy the journey!! Become '80% technical & 20% people management' from '80% people management & 20% technical'.

People Manager Read More »

Red hat Openshift Administrator’s Day to Day activities

Vanakkam all. Last week I had a discussion with one of CubenSquare's students, who has been working as a Redhat Openshift admin for the past 3 years in the UK. He started in L1 support and is now in L3, thanks to demand and skills. He shared his day-to-day activities as an Openshift admin. By reading this, you can understand what an Openshift admin does in his role; it also helps you prepare for interviews by anticipating questions around the topics below. At a high level, he gets the below requests through Jira as user stories: design the Openshift cluster; provide logging solutions; cluster scaling; registry solutions; namespace creation and administration; RBAC; installing and managing operators; cluster upgrades; application migration; security; troubleshooting. Let's break down the tasks:
1. Design the cluster – to build a cluster, he needs to analyze the cluster size, node size, number of workers, number of infra nodes, types of storage, type of authentication to the cluster, and type of load balancer to use.
2. The logging solution involves understanding an external logging solution – think of Splunk or any other logging tool. OCP is not a logging platform (meaning it is not ideal to store all logs inside the cluster); per the RedHat recommendation, all logs (audit, infra, and application logs) should be stored outside the cluster. So as an OCP engineer, we can have an external logging solution like Splunk (or whatever is used in your org) and use the ClusterLogForwarder in OCP to send the logs.
3. Cluster scaling – handling the workloads in the cluster, how they impact it, and solutioning around this.
4. Registry solutions – how we store and manage container images, both for the cluster and for applications.
5. Namespaces – maintain and manage projects/namespaces, set resource quotas and limit ranges, and check whether project templates are required.
6. RBAC – what kind of RBAC is to be created, managed, and maintained in the cluster, for both administrators and consumers.
7. Installing operators in the cluster.

Cluster upgrade/patching:
1. If RedHat releases a new OCP version: doing research around the new version, analysis, and requirements gathering – identifying new features, degraded features, and how an upgrade would impact our existing clusters.
2. Regular patching of clusters.

Migration/developer experience:
1. Working closely with consumers to help them migrate their applications to the platform.
2. Assisting them with resources such as pods, services, network policies, PVCs, etc.
3. Creating Helm charts.
4. Solutions for their applications and workloads.

Security:
1. Managing ACLs.
2. Hardening the cluster – for example, using the compliance operator inside the cluster.
3. RBAC for service accounts, and how they can be managed and maintained. The suggested way is to use least-privileged roles and rolebindings.
4. Scanning of images inside the cluster; external solutions like Aqua or JFrog Xray scan can also be used.
5. Network policies.

In our next article we shall discuss the day-to-day activities of an AWS cloud engineer. Thank you
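Task 5 above mentions setting resource quotas and limit ranges per namespace. A minimal sketch of the two objects (the names, namespace, and numbers are illustrative assumptions): the ResourceQuota caps what the whole namespace may consume, while the LimitRange supplies per-container defaults for workloads that don't set their own.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota          # illustrative name
  namespace: team-a         # illustrative namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: team-defaults
  namespace: team-a
spec:
  limits:
    - type: Container
      default:              # applied when a container sets no limits
        cpu: 500m
        memory: 512Mi
      defaultRequest:       # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
```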

Red hat Openshift Administrator’s Day to Day activities Read More »

Kanyakumari- Windmill & Technology

I am now in my native Kanyakumari, the southernmost tip of India, for a technical session. Just thought of sharing the view of the windmills. We will also be discussing the windmills' tech-stack background in 'Freshers Pakkam – A Freshers-only page'. Cloud, Devops, monitoring, operations & windmills: the technology stack used in delivering 'sustainable energy solutions' includes analytics, cyber security, control & monitoring, remote operations, optimization, and overall digital solutions – digital solutions that deliver greater predictability, increased renewable-energy production, and more efficient operations… So – Cloud and Devops are everywhere ✍️

Kanyakumari- Windmill & Technology Read More »