How to calculate the number of nodes, CPUs, memory, and cores required for your Red Hat OpenShift cluster

Vanakkam all,

A question I often get from students is how to calculate node capacity, the resources to be reserved, the number of vCPUs, and so on for an environment.

To start with, irrespective of the tool, be it Red Hat OpenShift, middleware, Kubernetes, or a database, all estimations are based on the number of applications we deploy, the size of those applications, and how heavily they are used.

Initially, architects work with product owners to understand the client's requirements, application usage, and the estimated number of users who will access the application. Then the developers come in, to assess JVM usage, load-testing results, the application framework, and the average footprint of the applications. The estimation starts from these initial discussions.

Factors to be considered:

  • How many pods are to be deployed
  • Application framework
  • Historical load of the applications
  • Average memory footprint of the applications
  • Per node: memory capacity and number of vCPUs
  • Capacity reserved for autoscaling
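The factors above can be turned into a rough back-of-the-envelope calculation. Here is a minimal sketch in Python; the function name, the default reservation fractions, and all of the example numbers (pod footprint, node size) are illustrative assumptions, not Red Hat guidance, and any real sizing should be validated against load tests:

```python
import math

def estimate_nodes(num_pods, pod_mem_gib, pod_cpu_millicores,
                   node_mem_gib, node_vcpus,
                   system_reserved_frac=0.25, autoscale_headroom_frac=0.20):
    """Rough worker-node count from the factors listed above.

    system_reserved_frac: capacity held back for the OS, kubelet, and
    system pods (illustrative default).
    autoscale_headroom_frac: spare capacity reserved for autoscaling.
    """
    usable_frac = 1.0 - system_reserved_frac - autoscale_headroom_frac
    usable_mem_per_node = node_mem_gib * usable_frac          # GiB
    usable_cpu_per_node = node_vcpus * 1000 * usable_frac     # millicores

    total_mem = num_pods * pod_mem_gib
    total_cpu = num_pods * pod_cpu_millicores

    nodes_by_mem = math.ceil(total_mem / usable_mem_per_node)
    nodes_by_cpu = math.ceil(total_cpu / usable_cpu_per_node)
    # The binding constraint (memory or CPU) decides the node count.
    return max(nodes_by_mem, nodes_by_cpu)

# Example: 60 pods averaging 1 GiB / 250m each, on workers
# with 32 GiB RAM and 8 vCPUs (hypothetical numbers).
print(estimate_nodes(60, 1.0, 250, 32, 8))  # → 4
```

Note that the average footprint used here should come from the historical-load and load-testing data mentioned earlier, not from guesswork.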

To summarize: with respect to Red Hat OpenShift, the key input when estimating cluster size is how many pods must be running to meet the application's availability and resiliency requirements. The picture above explains the rest.
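The availability point can be made concrete with a rule of thumb: if a minimum number of pods must stay serving even while some nodes are down, you need extra replicas (spread one per node, e.g. via pod anti-affinity) to cover the tolerated failures. A tiny sketch, with illustrative names and numbers of my own:

```python
def replicas_for_resiliency(min_serving, tolerated_node_failures):
    """Replicas needed so that at least `min_serving` pods survive the
    loss of `tolerated_node_failures` nodes, assuming one replica per
    node. Rule-of-thumb sketch, not an official formula."""
    return min_serving + tolerated_node_failures

# Keep 3 pods serving through 1 node failure → run 4 replicas
# (which also implies at least 4 schedulable nodes for the spread).
print(replicas_for_resiliency(3, 1))  # → 4
```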