July 8, 2021 | By Budi Darmawan | 3 min read

Operators are the de facto standard for adding features and capabilities to a Red Hat OpenShift cluster.

Applications and middleware are packaged as operators and made available on the OperatorHub. Although most operators can be installed within a few clicks, some of the more complex ones require a deeper understanding of the infrastructure. Like the water supply for a kitchen sink, most people just need to know that it is available; knowing the plumbing beneath the surface, however, is necessary for troubleshooting when things do not work as expected.

This article explains the underlying objects and processes that make up operators and the Operator Framework. The content is divided into two parts: extending the OperatorHub and deploying an operator.

Extending the OperatorHub

The OperatorHub is populated from the content of OperatorSources and CatalogSources. Most of the newer sources use the CatalogSource format. I will explain the difference between the CatalogSource and OperatorSource and how they work in a future article:

Figure: OperatorSource and CatalogSource.

You can view these sources in the web console under Administration > Cluster Settings > Global Configuration > OperatorHub > Sources. The following is a screenshot of this menu:

Figure: OperatorHub sources.
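You can also list these sources from the command line. A minimal sketch, assuming the default openshift-marketplace namespace:

  # List the catalog sources that feed the OperatorHub
  oc get catalogsources -n openshift-marketplace

  # On clusters that still use the older format, OperatorSources can be listed the same way
  oc get operatorsources -n openshift-marketplace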

The catalog source refers to a non-executable container image. That image contains a file that acts as a catalog of PackageManifests that can be installed. When a CatalogSource is defined, OpenShift creates a Job to load the catalog image, retrieve the individual PackageManifests, and create the corresponding objects in OpenShift. Each PackageManifest object is a tile that you can see in the Operators > OperatorHub menu of the OpenShift web console:

Figure: CatalogSource and PackageManifest.
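As a rough sketch, a CatalogSource that points to a catalog image looks like the following. The names and the image reference are hypothetical:

  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    name: my-catalog                  # hypothetical name
    namespace: openshift-marketplace
  spec:
    sourceType: grpc                  # the catalog is served from an index image
    image: quay.io/example/my-operator-index:v1.0  # hypothetical catalog image
    displayName: My Operators
    publisher: Example Inc.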

Each PackageManifest object contains its own definition of how to install the operator, including the following (see the inspection commands after this list):

  • Channels: The paths for installing and upgrading an operator package.
  • Cluster Service Version (CSV): The package definition for a specific version of the operator. CSVs allow an operator that subscribes to a channel to evolve dynamically (upgrade).
  • Custom Resource Definition: The part of the CSV that defines the structure of the Custom Resources that the operator will manage.
  • Container images: The images that are pulled when you install the CSV.
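To see these pieces for a given package, you can inspect the PackageManifest object directly. A sketch, using the hypothetical package name my-operator:

  # List all packages available in the OperatorHub
  oc get packagemanifests -n openshift-marketplace

  # Show the channels and the CSV each channel currently points to
  oc get packagemanifest my-operator -n openshift-marketplace \
    -o jsonpath='{range .status.channels[*]}{.name}{" -> "}{.currentCSV}{"\n"}{end}'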

Operator deployment

When you choose to install an operator from the OperatorHub, you create a Subscription object that subscribes to a channel in the PackageManifest. Subscribing allows automatic updates (as defined in the installPlanApproval field) when the CSV in the channel is updated:

Figure: Channel and Subscription.
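The install dialog in the web console generates a Subscription similar to the following sketch; the names here are hypothetical, and source and sourceNamespace identify the CatalogSource that provides the package:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: my-operator               # hypothetical
    namespace: openshift-operators
  spec:
    channel: stable                 # the channel in the PackageManifest to follow
    name: my-operator               # the package name from the PackageManifest
    source: my-catalog              # the CatalogSource that provides the package
    sourceNamespace: openshift-marketplace
    installPlanApproval: Automatic  # or Manual, to require approval for upgrades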

From the subscribed channel, the CSV is resolved and an InstallPlan is generated; the InstallPlan contains the list of resources that must be created for this operator, including the Custom Resource Definitions that the operator manages. Once the installation is successful (the CSV phase shown by the oc get csv command becomes Succeeded), the operator is installed:

Figure: Installed operator.
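A quick way to verify the installation from the command line, assuming the operator was installed into the openshift-operators namespace:

  # The InstallPlan lists the resources OLM created for the operator
  oc get installplans -n openshift-operators

  # The CSV phase should reach Succeeded once the operator is installed
  oc get csv -n openshift-operators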

Once an operator is installed, you have a Deployment with a pod that runs the operator controller process. The controller runs a loop that monitors the Custom Resources in its namespace (or in all namespaces, depending on the installation mode). When a Custom Resource is created, the controller may perform additional tasks, such as creating more resources in the cluster.

In the example illustrated above, creating a StorageCluster custom resource triggers the operator to create an OpenShift Container Storage cluster based on the content of that resource.
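As a rough sketch, that custom resource looks like the following; the exact spec fields depend on the OpenShift Container Storage version, so treat this as illustrative only:

  apiVersion: ocs.openshift.io/v1
  kind: StorageCluster
  metadata:
    name: ocs-storagecluster
    namespace: openshift-storage
  spec:
    # The operator's reconcile loop reads this spec and creates the underlying
    # storage resources; the real spec defines storage device sets, replica
    # counts and so on (omitted here for brevity).
    manageNodes: false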

Learn more about IBM Garage.
