OpenShift Q&A

SET – 1
1. What is OpenShift?
OpenShift is an open-source container application platform based on Kubernetes. It
helps developers develop, deploy, and manage containerized applications.

2. What are the key components of OpenShift?
Master: Manages nodes and orchestrates the deployment of containers.
Nodes: Run containers and handle workloads.
etcd: Stores cluster configuration data.
OpenShift API: Handles API calls.

3. How does OpenShift differ from Kubernetes?
OpenShift extends Kubernetes by adding features such as a web console, a built-in
CI/CD pipeline, multi-tenant security, and developer tools. It also has stricter security
policies.

4. What is Source-to-Image (S2I) in OpenShift?
S2I is a process that builds Docker images directly from application source code,
making it easier to deploy apps without writing a Dockerfile. It automatically builds a
container from source code and deploys it in OpenShift.
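For example, a minimal BuildConfig using the source (S2I) strategy might look like the sketch below; the application name, Git URL, and builder image stream are hypothetical:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: myapp                      # hypothetical application name
    spec:
      source:
        git:
          uri: https://github.com/example/myapp.git   # hypothetical repository
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: nodejs:latest        # builder image; assumes a nodejs image stream exists
      output:
        to:
          kind: ImageStreamTag
          name: myapp:latest           # resulting application image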

5. Explain the difference between DeploymentConfig and Deployment in OpenShift.
DeploymentConfig is specific to OpenShift and offers additional control over
deployment strategies, hooks, and triggers, whereas Deployment is a Kubernetes
native resource for deploying containerized apps.
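A minimal DeploymentConfig sketch showing the OpenShift-specific triggers and strategy fields (all names are hypothetical):

    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        app: myapp
      strategy:
        type: Rolling                  # DeploymentConfig-specific strategy block
      triggers:
        - type: ConfigChange           # redeploy when the configuration changes
        - type: ImageChange            # redeploy when the referenced image changes
          imageChangeParams:
            automatic: true
            containerNames:
              - myapp
            from:
              kind: ImageStreamTag
              name: myapp:latest
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: myapp:latest      # placeholder; the image change trigger injects the resolved image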

6. How does OpenShift manage storage and persistent volumes?
OpenShift uses Persistent Volume (PV) and Persistent Volume Claim (PVC) to provide
dynamic and static storage for containerized applications. It supports different
storage backends like NFS, AWS EBS, and GlusterFS.
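For instance, a PersistentVolumeClaim requesting dynamically provisioned storage could look like this sketch; the claim name, size, and storage class are hypothetical:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myapp-data                 # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi                 # requested capacity
      storageClassName: gp2            # hypothetical storage class; omit to use the cluster default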

7. How do you handle multi-tenancy and security in OpenShift?
OpenShift uses Role-Based Access Control (RBAC), Security Context Constraints
(SCC), and Network Policies to handle multi-tenancy. SCCs define the security rules
for pods, and RBAC defines access control based on user roles.
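As an illustration of the RBAC side, a RoleBinding granting a user the built-in edit role within a single project might look like this (the user and project names are hypothetical):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: edit-for-developer
      namespace: team-a                # the project this binding is scoped to
    subjects:
      - kind: User
        name: developer1               # hypothetical user
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: edit                       # built-in ClusterRole granting edit access
      apiGroup: rbac.authorization.k8s.io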

8. Explain how you would implement CI/CD pipelines in OpenShift.
OpenShift has a native Jenkins integration for automating CI/CD pipelines. It can be
set up using OpenShift’s BuildConfigs and Jenkins Pipelines to automate testing,
building, and deploying applications.
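One way to wire this up is a BuildConfig with the JenkinsPipeline strategy; the sketch below assumes a hypothetical repository containing a Jenkinsfile (newer releases generally favour OpenShift Pipelines/Tekton over the Jenkins pipeline strategy):

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: myapp-pipeline             # hypothetical pipeline name
    spec:
      source:
        git:
          uri: https://github.com/example/myapp.git   # hypothetical repo with a Jenkinsfile
      strategy:
        type: JenkinsPipeline
        jenkinsPipelineStrategy:
          jenkinsfilePath: Jenkinsfile # pipeline definition taken from the repository
      triggers:
        - type: ConfigChange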

9. What is OpenShift Operator Framework, and why is it important?
The Operator Framework in OpenShift automates the deployment, scaling, and
lifecycle management of Kubernetes applications. It allows applications to be
managed in the same way Kubernetes manages its components.
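For instance, installing an Operator through the Operator Lifecycle Manager is typically done with a Subscription resource; a sketch (the operator name and channel are hypothetical and depend on what the catalog actually offers):

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: my-operator                # hypothetical subscription name
      namespace: openshift-operators   # namespace the Operator is installed into
    spec:
      channel: stable                  # update channel to follow
      name: my-operator                # package name as listed in the catalog
      source: redhat-operators         # catalog source providing the package
      sourceNamespace: openshift-marketplace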

10. How would you design a highly available OpenShift cluster across multiple regions?
Use a multi-region architecture with disaster recovery features. Utilize load
balancers (like F5 or HAProxy), configure etcd clusters for consistency, and use
persistent storage replicated across regions. Also, use Cluster Federation for
managing multiple clusters.

SET – 2
1. What is an OpenShift project, and how is it used?
An OpenShift project is a logical grouping of resources, such as applications, builds,
and deployments. It provides a way to organize and manage resources within a
cluster.

2. How do you secure an OpenShift cluster?
Implementing RBAC to limit access.
Using Network Policies to control traffic between pods (see the example below).
Enabling SELinux and Security Context Constraints to enforce pod-level security.
Encrypting sensitive data in etcd and using TLS for securing communication.
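As an example of the Network Policy point above, a minimal policy that only allows traffic from pods in the same project might look like this (the policy and project names are hypothetical):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace       # hypothetical policy name
      namespace: team-a                # hypothetical project
    spec:
      podSelector: {}                  # applies to all pods in the project
      ingress:
        - from:
            - podSelector: {}          # only allow traffic from pods in the same namespace
      policyTypes:
        - Ingress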

3. How would you perform an OpenShift cluster upgrade?
Plan upgrades by checking the OpenShift compatibility matrix, backing up etcd, and
testing the upgrade in a staging environment. Perform upgrades using the OpenShift
Command-Line Interface (CLI) and ensure high availability by performing a rolling
upgrade.

4. Explain the concept of a pod in OpenShift.
A pod is the smallest unit of deployment in OpenShift. It represents a group of
containers that share a network namespace and are scheduled together.

5. What is a route in OpenShift, and how does it differ from a service?
A route defines how external traffic is routed to services within a cluster. It acts as a
virtual host for your applications. A service is a logical group of pods that provide the
same functionality.

6. Explain the concept of a deployment configuration in OpenShift.
A deployment configuration defines the desired state of an application, including the
number of replicas, image, and resource requirements. It also handles rolling
updates and scaling.

7. What is the role of a build configuration in OpenShift?
A build configuration defines the process for building container images. It can be
triggered by source code changes or scheduled events.

8. What is the difference between a stateful application and a stateless application in
OpenShift?
A stateful application stores data that persists across restarts or failures. Examples
include databases and message queues. A stateless application doesn’t require
persistent data and can be easily scaled horizontally.

9. How do you manage persistent storage in OpenShift?
OpenShift provides options like Persistent Volumes (PVs) and Persistent Volume
Claims (PVCs) to manage persistent storage for stateful applications.

10. What is Route in OpenShift Container Platform?
You can use a route to host your application at a public URL (Uniform Resource Locator). Depending on the application’s network security setup, it can be secure or insecure. An HTTP (Hypertext Transfer Protocol)-based route is an unsecured route that provides a service on an unsecured application port and uses the basic HTTP routing protocol.
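For example, a secured route using edge TLS termination might look like the following sketch (the host and service names are hypothetical); omitting the tls block would leave it as a plain, unsecured HTTP route:

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myapp
    spec:
      host: myapp.apps.example.com     # hypothetical public hostname
      to:
        kind: Service
        name: myapp                    # service that backs the route
      port:
        targetPort: 8080               # service port the route forwards to
      tls:
        termination: edge              # TLS is terminated at the router
        insecureEdgeTerminationPolicy: Redirect   # redirect plain HTTP to HTTPS
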
SET – 3
1. What are Red Hat OpenShift Pipelines?
Red Hat OpenShift Pipelines is a cloud-native continuous integration and delivery
(CI/CD) system based on Kubernetes. It uses Tekton building blocks to
automate deployments across several platforms, abstracting away the underlying
implementation details.
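A minimal Tekton Pipeline sketch with two tasks; the task names are hypothetical, and real pipelines usually reference Tasks or ClusterTasks from the Tekton catalog:

    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: build-and-deploy           # hypothetical pipeline name
    spec:
      params:
        - name: git-url
          type: string
      tasks:
        - name: build
          taskRef:
            name: build-image          # hypothetical Task that builds the image
          params:
            - name: url
              value: $(params.git-url)
        - name: deploy
          runAfter:
            - build                    # run only after the build task finishes
          taskRef:
            name: deploy-app           # hypothetical Task that deploys the image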

2. Explain how Red Hat OpenShift Pipelines uses triggers.
Create a full-featured CI/CD system with Triggers and Pipelines in which Kubernetes
resources define the entire CI/CD process. Triggers capture and process external events, such as a Git pull request, and extract key pieces of information from them.
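A rough sketch of an EventListener that connects a Git webhook to a pipeline through a TriggerBinding and a TriggerTemplate; all names here are hypothetical, and the referenced binding and template must already exist:

    apiVersion: triggers.tekton.dev/v1beta1
    kind: EventListener
    metadata:
      name: github-listener            # hypothetical listener name
    spec:
      serviceAccountName: pipeline     # service account used to create the triggered resources
      triggers:
        - name: on-push
          bindings:
            - ref: github-push-binding     # hypothetical TriggerBinding extracting commit details
          template:
            ref: build-deploy-template     # hypothetical TriggerTemplate that creates a PipelineRun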

3. What can OpenShift Virtualization do for you?
OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads. It uses Kubernetes custom resources to introduce additional objects into your OpenShift Container Platform cluster and enable virtualization tasks.

4. What is the use of admission plug-ins?
Admission plug-ins can be used to regulate how the OpenShift Container Platform functions. After a request has been authenticated and authorized, admission plug-ins intercept resource requests submitted to the master API to validate them and to ensure that scaling policies are adhered to.

5. What are OpenShift cartridges?
OpenShift cartridges serve as hubs for application development. Along with a preconfigured environment, each cartridge has its own libraries, build methods, source
code, routing logic, and connection logic. All of these elements contribute to the
smooth operation of your application.

6. Define labels.
Labels are used to organise, group, and select API objects. Label selectors are used
by services to decide which pods to proxy to, and pods are “tagged” with labels. This
allows services to refer to groups of pods, even if the pods themselves have different
containers.
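For example, a Service that selects pods by a label (the names are hypothetical); any pod tagged app=frontend becomes a backend of this service:

    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      selector:
        app: frontend                  # proxies to every pod labelled app=frontend
      ports:
        - port: 80                     # port exposed by the service
          targetPort: 8080             # container port on the selected pods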

7. Differentiate OpenStack and OpenShift?
The most significant difference is that OpenStack provides Infrastructure as a Service (IaaS): it offers bootable virtual machines together with object and block storage. OpenShift, by contrast, is a container application platform that runs on top of such infrastructure.

8. Define custom build strategy.
The custom build strategy allows developers to select a specific builder image that
will be in charge of the entire build process. Using your builder image, you can
customise your build procedure.
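A BuildConfig using the custom strategy might look like this sketch; the builder image is hypothetical and must contain the entire build logic itself:

    apiVersion: build.openshift.io/v1
    kind: BuildConfig
    metadata:
      name: myapp-custom-build
    spec:
      strategy:
        type: Custom
        customStrategy:
          from:
            kind: DockerImage
            name: registry.example.com/custom-builder:latest   # hypothetical builder image
          exposeDockerSocket: false    # set to true only if the builder must run container commands
      output:
        to:
          kind: ImageStreamTag
          name: myapp:latest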

9. Enlist a few build strategies that are used in OpenShift.
 Custom Strategy
 Source-to-Image (S2I) Strategy
 Docker Strategy
 Pipeline Strategy

10. How does OpenShift use Docker and Kubernetes?
OpenShift uses Docker to build and run application containers and Kubernetes as the control system that orchestrates them. This control system enables the many deployment pipelines that are later used for auto-scaling, testing, and other procedures.

SET – 4
1. Why do we need DevOps tools?
The use of DevOps tools can greatly increase the flexibility of software delivery.
Furthermore, DevOps tools aid in increasing deployment frequency and decreasing
failure rates. DevOps tools also contribute to quicker recovery and better time
management between fixes.

2. What is meant by application scaling in Openshift?
Auto-scaling in OpenShift is also known as pod auto-scaling. There are two categories of scaling:
i. Up (vertical scaling): with this technique, your application remains in the
same location while receiving extra resources to accommodate a greater
load.
ii. Out (horizontal scaling): a number of replicas of the application are created,
and the application load is distributed across them, so that a larger load can
be handled by scaling out (see the sketch below).
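A minimal HorizontalPodAutoscaler sketch for horizontal (out) scaling; the target name and thresholds are hypothetical:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment               # could also target a DeploymentConfig (apps.openshift.io/v1)
        name: myapp                    # hypothetical workload name
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 75   # scale out when average CPU use exceeds 75%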

3. Explain about the OpenShift CLI?
OpenShift applications are managed from the command line via the OpenShift CLI (oc). The OpenShift CLI supports the end-to-end application lifecycle: every basic and advanced configuration, management, addition, and deployment task can be performed with it.

4. What is meant by feature toggles?
Feature toggles keep two versions of a feature in the same codebase and let you switch between them without redeploying. This technique decouples releasing a change from deploying it, which is useful across different server groups, legacy monoliths, configurations, and single-server setups.

5. Explain about Haproxy on Openshift?
If your application runs on OpenShift, HAProxy sits in front of it and accepts all incoming connections. It parses the HTTP protocol to decide which application instance the connection should be routed to. This is significant because it enables sticky sessions for the client.

6. What is meant by Openshift Security?
OpenShift security is primarily a combination of two components that manage security constraints:
i. SCC (Security Context Constraints)
ii. Service Account

7. What is Volume Security?
Volume security refers to protecting the PVC and PV of OpenShift cluster projects.
OpenShift has four main elements for managing volume access: runAsUser, fsGroup, seLinuxOptions, and supplementalGroups.
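These settings appear in the pod security context; a sketch follows, where the ID values are hypothetical and must fall within the ranges allowed by the project’s SCC:

    apiVersion: v1
    kind: Pod
    metadata:
      name: secure-app
    spec:
      securityContext:
        runAsUser: 1000610000          # UID the containers run as
        fsGroup: 1000610000            # group applied to mounted block volumes
        supplementalGroups: [5555]     # extra groups for shared storage such as NFS
        seLinuxOptions:
          level: "s0:c25,c10"          # SELinux MCS label applied to the pod
      containers:
        - name: app
          image: registry.example.com/app:latest   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: myapp-data      # hypothetical PVC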

8. What Is Blue/green Deployments?
The Blue/Green deployment method makes sure you have two variants of your
application stacks accessible during the deployment, which reduces the amount of
time it takes to complete a deployment cutover. We can quickly transition between
our two active application stacks by utilising the service and routing tiers.
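One way to do the cutover is at the route tier: the sketch below sends all traffic to the green service and keeps blue as an alternate backend, so switching back is just a matter of swapping the weights (service names are hypothetical):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myapp
    spec:
      to:
        kind: Service
        name: myapp-green              # currently live stack
        weight: 100
      alternateBackends:
        - kind: Service
          name: myapp-blue             # previous stack, kept ready for rollback
          weight: 0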

9. What Is Deployment Pod Resources?
A deployment is carried out by a pod that uses resources (memory and CPU) on a node. By default, pods consume unbounded node resources; however, if a project specifies default container limits, pods consume resources only up to those limits.
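For example, container-level requests and limits could be declared like this (the values and image are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest   # hypothetical image
          resources:
            requests:
              cpu: 100m                # scheduler reserves 0.1 CPU core
              memory: 256Mi
            limits:
              cpu: 500m                # container is throttled above 0.5 core
              memory: 512Mi            # container is killed if it exceeds this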

10. What Is Rolling Strategy?
A rolling deployment gradually replaces instances of an application’s older version
with instances of its newer version. Before scaling down the old components, a rolling
deployment normally waits for new pods to become ready via a readiness check.
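In a DeploymentConfig this is expressed through the strategy block; a sketch with hypothetical rolling parameters:

    strategy:
      type: Rolling
      rollingParams:
        maxUnavailable: 25%            # how many old pods may be taken down at once
        maxSurge: 25%                  # how many extra new pods may be created at once
        intervalSeconds: 1             # polling interval while waiting for readiness
        timeoutSeconds: 600            # give up if the rollout does not progress in time
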
SET – 5
1. What Is Haproxy On Openshift?
On OpenShift, if your application is scalable, HAProxy sits in front of it and accepts all
incoming connections. It parses the HTTP protocol and decides which application
instance the connection should be routed to. This is important as it allows the user
to have sticky sessions.

2. What Are Stateful Pods?
Pods can be stopped and restarted with StatefulSets, a Kubernetes feature that
keeps their network address and storage intact. StatefulSets (PetSets in OCP 3.4) are
still a beta feature, but complete support ought to be included in a subsequent
update.
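A minimal StatefulSet sketch with stable per-pod storage via volumeClaimTemplates (names, image, and sizes are hypothetical):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db                  # headless service providing stable network identities
      replicas: 3
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
            - name: db
              image: registry.example.com/db:latest   # hypothetical image
              volumeMounts:
                - name: data
                  mountPath: /var/lib/db
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi          # each pod gets its own 10Gi volume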

3. Name some identity providers in OAuth.
The identity providers supported by the OAuth configuration include HTPasswd, LDAP, Allow All, Deny All, and Basic Authentication.
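In OpenShift 4, identity providers are configured on the cluster OAuth resource; a sketch for an HTPasswd provider follows, where the provider name and secret are hypothetical and the secret must hold the htpasswd file in the openshift-config namespace:

    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster                    # the cluster-scoped OAuth configuration
    spec:
      identityProviders:
        - name: local-users            # hypothetical provider name
          mappingMethod: claim
          type: HTPasswd
          htpasswd:
            fileData:
              name: htpass-secret      # hypothetical secret containing the htpasswd file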

4. What do you know about OpenShift Kubernetes Engine?
With Red Hat’s OpenShift Kubernetes Engine, you can use a production-ready
Kubernetes infrastructure that is built for businesses. The OpenShift Kubernetes
Engine has the same SLAs, bug fixes, and defenses against typical flaws and
vulnerabilities as the OpenShift Container Platform.

5. What do you understand by service mesh?
A service mesh is the web of microservices that make up applications in a distributed
microservice architecture, as well as the connections between those microservices. A
Service Mesh may become challenging to understand and maintain as it becomes
larger and more complicated.

6. What is the procedure followed in Red Hat when dealing with a new incident?
An incident is an occurrence that causes one or more Red Hat services to degrade or
go down. A client or a member of the Customer Experience and Engagement (CEE)
team can report an incident via a support case, the centralised monitoring and
alerting system, or a member of the SRE team. The severity of an incident is
determined by its impact on the service and the client.

7. What is the concept of the OpenShift container?
OpenShift containers provide a platform on which the development, testing, and hosting teams can deploy applications in the cloud. These containers are uploaded to the server using Docker technology. The two package levels of the OpenShift container are:
 OpenShift Container Local
 OpenShift Container Lab.

8. What are the main components of OpenShift?
 Kubernetes master machine components: etcd, API Server, Controller Manager, Scheduler
 Kubernetes node components: Docker, Kubelet service, Kubernetes proxy service.

9. How do you monitor and troubleshoot OpenShift clusters?
OpenShift offers integrated monitoring tools like Prometheus for metrics and
Grafana for visualizing cluster performance. Kibana is used for log analysis, and
Jaeger for distributed tracing. Additionally, OpenShift provides Cluster Logging and
Cluster Monitoring operators for comprehensive observability.

10. How would you perform an OpenShift cluster upgrade?
Plan upgrades by checking the OpenShift compatibility matrix, backing up etcd, and
testing the upgrade in a staging environment. Perform upgrades using the OpenShift
Command-Line Interface (CLI) and ensure high availability by performing a rolling
upgrade.