Evolution of RedHat Openshift

Sonam Kumari Singh
5 min read · Jul 31


OpenShift and What It Is Used For:

OpenShift is a popular container management system, a family of containerization software products developed by Red Hat.

Red Hat OpenShift is an open source container application platform that runs on Red Hat Enterprise Linux CoreOS (RHCOS) and is built on top of Kubernetes.

OpenShift is a cloud development Platform as a Service (PaaS) hosted by Red Hat. It’s an open-source, cloud-based, user-friendly platform used to create, test, and run applications, and finally deploy them on the cloud. In other words, it helps us develop, deploy, and manage container-based applications, and it also provides a self-service platform on the cloud to create, modify, or deploy applications on demand, thus enabling faster development and release life cycles.

It is a container platform that works with Kubernetes to help applications run more efficiently.

OpenShift helps organizations move their traditional application infrastructure and platform from physical, virtual mediums to the cloud.

OpenShift supports a variety of applications, which can be quickly developed and deployed on the OpenShift cloud platform. OpenShift supports three kinds of platforms for developers:

  1. IaaS
  2. SaaS
  3. PaaS

About Kubernetes Infrastructure

Kubernetes manages containerized applications across a set of containers or hosts and provides mechanisms for deployment, maintenance, and application-scaling. The container runtime packages and runs containerized applications. A Kubernetes cluster consists of one or more masters and a set of nodes.

Master Components

The main master components are the API server and the controller manager; both are covered in detail below, along with etcd and the scheduler.

Architecture of OpenShift

OpenShift is a layered system in which each layer is tightly bound to the others through the Kubernetes and Docker clusters. The architecture of OpenShift is designed in such a way that it can support and manage Docker containers, which are hosted on top of all the layers using Kubernetes.

In this model, Docker helps in the creation of lightweight Linux-based containers, and Kubernetes supports the task of orchestrating and managing containers on multiple hosts.

Red Hat OpenShift is built on top of Kubernetes. It takes care of integrated scaling, monitoring, logging, and metering. With OpenShift, you can do anything you can do on Kubernetes, and much more through OpenShift-specific features.

OpenShift includes everything you need for hybrid cloud, like a container runtime, networking, monitoring, container registry, authentication, and authorization.

OpenShift architecture and components

One of the key roles of the OpenShift architecture is to manage containerized infrastructure with Kubernetes. Kubernetes is responsible for the deployment and management of infrastructure. In any Kubernetes cluster, we can have more than one master and multiple nodes, which ensures there is no single point of failure in the setup.

I explain how OpenShift can do all of that by introducing its architecture and components.

  • Infrastructure layer
  • Service layer
  • Main / Master node
  • Worker nodes
  • Registry
  • Persistent storage
  • Routing layer
1. Infrastructure layer

In the infrastructure layer, you can host your applications on physical servers, virtual servers, or even on the cloud (private/public).

2. Service layer

The service layer is responsible for defining pods and access policy. The service layer provides a permanent IP address and host name to the pods; connects applications together; and allows simple internal load balancing, distributing tasks across application components.
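As a minimal sketch of what the service layer defines, here is a Kubernetes Service that gives a set of pods a stable internal address and load-balances across them (the name `storefront` and its labels are hypothetical):

```yaml
# Hypothetical Service: a permanent virtual IP and hostname for
# all pods labeled app=storefront, with simple internal
# load balancing across them.
apiVersion: v1
kind: Service
metadata:
  name: storefront
spec:
  selector:
    app: storefront        # matches pods carrying this label
  ports:
    - protocol: TCP
      port: 80             # port exposed by the service
      targetPort: 8080     # port the pod containers listen on
```

Inside the cluster, other applications can then reach these pods at a stable DNS name such as `storefront.<namespace>.svc.cluster.local`, regardless of which pods come and go.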

There are mainly two types of nodes in an OpenShift cluster: main nodes and worker nodes. Applications reside in the worker nodes. You can have multiple worker nodes in the cluster; the worker nodes are where all your coding adventures happen, and they can be virtual or physical.

Kubernetes Master / Main Node

etcd − It stores the configuration information, which can be used by each of the nodes in the cluster. It is a highly available key-value store that can be distributed among multiple nodes. It should be accessible only by the Kubernetes API server, as it may contain sensitive information.

API Server − The API server provides all operations on the cluster through the Kubernetes API. It implements an interface, which means different tools and libraries can readily communicate with it. A kubeconfig is a package, along with the server-side tools, that can be used for communication. It exposes the Kubernetes API.
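To make the kubeconfig idea concrete, here is a minimal sketch of such a file; the cluster name, server URL, and token below are all placeholders, not real values:

```yaml
# Minimal kubeconfig sketch: tells client tools which API server
# to talk to and which credentials to present.
apiVersion: v1
kind: Config
clusters:
  - name: my-openshift            # placeholder cluster name
    cluster:
      server: https://api.example.com:6443
contexts:
  - name: dev
    context:
      cluster: my-openshift
      user: developer
current-context: dev
users:
  - name: developer
    user:
      token: <api-token-goes-here>   # placeholder credential
```

Tools such as `kubectl` and `oc` read this file to know where the API server is and how to authenticate against it.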

Controller Manager − This component is responsible for most of the controllers that regulate the state of the cluster and perform tasks. It is responsible for collecting and sending information to the API server. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, etc.

Scheduler − Determines pod placement while considering current memory, CPU, and other environment utilization.
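For example, a pod can declare the CPU and memory it needs, and the scheduler takes these requests into account when picking a node; the names and values below are purely illustrative:

```yaml
# The scheduler reads resource requests like these when choosing
# a node with enough free capacity. Image and values are examples.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: registry.example.com/demo:latest
      resources:
        requests:
          cpu: "250m"        # a quarter of a CPU core
          memory: "128Mi"
        limits:
          cpu: "500m"        # hard ceilings enforced at runtime
          memory: "256Mi"
```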

Kubernetes Worker Node Components

The worker node is made of pods. A pod is the smallest unit that can be defined, deployed, and managed, and it can contain one or more containers. These containers include your applications and their dependencies. For example, Alex saves the code for her e-commerce platform in containers for each of the databases, front-end, user system, search engine, and so on.

Keep in mind that containers are ephemeral, so saving data in a container risks the loss of data. To prevent that, you can use persistent storage to save the database.

All containers in one pod share the same IP address and can share the same volumes. In the same pod, you can also have a sidecar container, which can be a service-mesh proxy or a security-analysis tool; it must be defined in the same pod, sharing the same resources as the other containers. Applications can be scaled horizontally, and they are wired together by services.
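A sketch of such a pod, with a hypothetical web container and a log-forwarding sidecar sharing one volume (all names and images here are assumptions for illustration):

```yaml
# Two containers in one pod: both share the pod's IP address and
# the 'shared-logs' volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}               # scratch volume, lives as long as the pod
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-forwarder        # sidecar reading the same volume
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Because both containers mount `shared-logs`, whatever the web container writes there is immediately visible to the sidecar.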

Kubernetes Proxy Service − This is a proxy service that runs on each node and helps make services available to external hosts. It helps in forwarding requests to the correct containers and is capable of carrying out primitive load balancing. It makes sure that the networking environment is predictable and accessible, but at the same time isolated as well. Node-level tasks such as managing pods, volumes, secrets, creating new containers, and health checks are handled by the kubelet, which also runs on each node; in OpenShift, external routing is handled by an HAProxy-based router.

3. Integrated OpenShift Container Registry

The OpenShift container registry is an inbuilt storage unit from Red Hat, used for storing Docker images. The latest integrated version of OpenShift comes with a user interface to view images in OpenShift's internal storage. These registries are capable of holding images with specified tags, which are later used to build containers from them.
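In OpenShift, tagged images like these are typically tracked through an ImageStream; a minimal, illustrative definition (the stream name and source image are assumptions) might look like:

```yaml
# An OpenShift ImageStream tracking a tag of an image so that
# builds and deployments can reference it by a stable name.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-app
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: quay.io/example/my-app:latest   # hypothetical source image
```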

4. Persistent Storage

Persistent storage means that when we stop or terminate a container, the data remains persistent (i.e., the files are not deleted automatically when the container is deleted).
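A minimal sketch of requesting such storage is a PersistentVolumeClaim; the claim name and size below are examples:

```yaml
# A PersistentVolumeClaim asking the cluster for 1Gi of storage.
# Data written to a volume backed by this claim survives container
# restarts and deletions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```

A pod then mounts it by referencing the claim in its `volumes` section with `persistentVolumeClaim: {claimName: db-data}`, so a database container can keep its files across restarts.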

Thank you !!💕


