
Install Red Hat’s cloud native storage in a disconnected OpenShift cluster in three steps


With the advent of big data and data emerging from a variety of places and devices, a platform-agnostic storage system is more important than ever to flexibly handle growing amounts of data and the analyses performed on that data while remaining highly scalable and secure. Adopting a software-defined storage solution for containers enables teams not only to develop and deploy applications quickly and efficiently across clouds, but also to support the use of artificial intelligence technologies to analyze all this data and derive its maximum value.

Red Hat OpenShift Data Foundation (ODF), previously Red Hat OpenShift Container Storage, is a software-defined, container-native storage solution that is integrated with the OpenShift Container Platform (OCP). Running as a Kubernetes service, OpenShift Data Foundation provides cluster data management services for containerized workloads on any infrastructure – from bare-metal servers to VMware VMs to hybrid and multi-cloud environments. OpenShift Data Foundation can also be decoupled and managed as a separate, independently scalable data store. Because it scales with the Kubernetes cluster, it simplifies management compared with an external storage system.

A disconnected, restricted, or air-gapped network is a security measure that physically isolates a computer network from insecure networks such as the Internet or insecure local networks. Because the network is offline or does not have full Internet access, air-gapped environments can keep critical systems and sensitive information safe from potential data theft or security breaches and reduce the risk of malicious attacks.

First, because a disconnected cluster has no active connection to the internet, the installation media and the contents of container registries need to be synchronized, or mirrored, to a mirror registry on a host that can access both the internet and your closed network: you will need to copy images, operators, and/or artifacts to a device that you can move across network boundaries. For OpenShift versions >= 4.11, it is recommended that you use the oc mirror tool for a simplified mirroring process and lifecycle management, as sketched below. For OpenShift versions <= 4.10, you can use oc adm commands for the mirroring process.
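To make the first step concrete, here is a minimal sketch of the oc mirror workflow. The registry host mirror.example.com:5000, the version 4.11, and the file name imageset-config.yaml are illustrative placeholders; adjust them to your environment.

```yaml
# imageset-config.yaml – declares what oc mirror should pull and mirror.
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: mirror.example.com:5000/mirror/oc-mirror-metadata   # placeholder registry
mirror:
  platform:
    channels:
    - name: stable-4.11                 # the OpenShift release payload to mirror
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: odf-operator                # OpenShift Data Foundation
    - name: local-storage-operator      # Local Storage Operator
```

```bash
# Run on a host that can reach both the internet and the mirror registry:
oc mirror --config=imageset-config.yaml docker://mirror.example.com:5000
```

oc mirror also supports a fully air-gapped workflow in which the image set is first written to an archive on disk, carried across the network boundary, and then pushed from the archive into the mirror registry.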

Second, once the mirroring process is complete and after making storage physically or virtually available to OpenShift Data Foundation, you need to install the Local Storage and OpenShift Data Foundation operators (including their dependencies); a sketch of this step follows below. An operator is a method of packaging, deploying, and managing a Kubernetes-native application.
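As an illustrative sketch of this second step, the operators can also be subscribed to from the command line instead of the web console. The CatalogSource name redhat-operator-index is an assumption (it matches what oc mirror typically generates); use the name of the catalog you actually mirrored. The ODF operator is subscribed analogously in the openshift-storage namespace.

```yaml
# Namespace, OperatorGroup, and Subscription for the Local Storage Operator.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
  - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable
  name: local-storage-operator
  source: redhat-operator-index          # assumed name of your mirrored CatalogSource
  sourceNamespace: openshift-marketplace
```

Applying this with oc apply -f and waiting for the install plan to complete leaves the operator running and ready to discover local storage devices.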

Third, the OpenShift Data Foundation storage system needs to be created and deployed. More specifically, you tell OpenShift which physical or virtual storage shall be used to create the logical storage system and how you want to make it available to the cluster; a sketch follows below. This completes the installation and enables you to start taking advantage of the benefits of OpenShift Data Foundation.
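A minimal sketch of such a storage system definition, assuming the Local Storage Operator has already exposed the local disks through a block-mode StorageClass named localblock (an illustrative name):

```yaml
# StorageCluster that consumes block PVs provided by the local StorageClass.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monDataDirHostPath: /var/lib/rook
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1                        # one device set...
    replica: 3                      # ...replicated across three nodes
    portable: false
    dataPVCTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "1"            # with local devices, the full disk is consumed
        storageClassName: localblock   # assumed StorageClass created by the LSO
        volumeMode: Block
```

Once the StorageCluster reports Ready, OpenShift Data Foundation exposes its block, file, and object storage classes to workloads in the cluster.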

Would you like to learn more?

Andrea Müller-Hansen and Chris Schneider, Technology Engineers on the Client Engineering team, wrote a step-by-step guide on how to install OpenShift Data Foundation using local storage devices on a disconnected VMware cluster. The guide includes:

  • Prerequisites for installing ODF
  • How to mirror the contents needed for the installation
  • How to install the operators needed for running ODF
  • How to configure and deploy ODF

Check out the article to learn more about the steps it takes to install ODF in a disconnected environment: [https://www.opensourcerers.org/2022/11/14/how-to-install-openshift-data-foundation-odf-4-10-in-a-disconnected-or-air-gapped-vmware-cluster]

Andrea Müller-Hansen

Technology Engineer, IBM Deutschland
