Storage considerations

To install IBM Cloud Pak for Data, you must have a supported persistent storage solution that is accessible to your Red Hat® OpenShift® cluster.

What storage options are supported for the platform?

Cloud Pak for Data supports and is optimized for several types of persistent storage.

Cloud Pak for Data uses dynamic storage provisioning. A Red Hat OpenShift cluster administrator must properly configure storage before Cloud Pak for Data is installed.

Important: It is your responsibility to review the documentation for the storage that you plan to use. Ensure that you understand any limitations that are associated with the storage.

As you plan your installation, remember that not all services support all types of storage. For complete information on the storage that each service supports, see Storage requirements. If the services that you want to install don't support the same type of storage, you can use a mixture of storage types on your cluster. However, use a single type of storage if possible.
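Deciding whether one storage type can serve all of your planned services comes down to intersecting the per-service support lists. The following sketch illustrates the idea; the service names and mappings are hypothetical placeholders, not the real support matrix, which is documented in Storage requirements:

```python
# Illustrative sketch: finding a storage type that every planned service
# supports. The mappings below are hypothetical placeholders; check the
# Storage requirements topic for the real per-service support lists.
supported_storage = {
    "Service A": {"OpenShift Data Foundation", "Portworx", "NFS"},
    "Service B": {"OpenShift Data Foundation", "Portworx"},
    "Service C": {"OpenShift Data Foundation", "NFS"},
}

def common_storage(services):
    """Return the storage types supported by every requested service."""
    return set.intersection(*(supported_storage[s] for s in services))

# A non-empty intersection means a single storage type can serve all of the
# services; an empty result means you need a mixture of storage types.
print(common_storage(["Service A", "Service B", "Service C"]))
```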

Storage option Version Notes
OpenShift Data Foundation
  • Version 4.12 or later fixes
  • Version 4.14 or later fixes
  • Version 4.15 or later fixes
Available in Red Hat OpenShift Platform Plus.

Ensure that you install a version of OpenShift Data Foundation that is compatible with the version of Red Hat OpenShift Container Platform that you are running. For details, see https://access.redhat.com/articles/4731161.

IBM Storage Fusion Data Foundation
  • Version 2.7.2 with the latest hotfix or later fixes
  • Version 2.8.0 or later fixes
Available in IBM Storage Fusion.

Ensure that you install a version of IBM Storage Fusion Data Foundation that is compatible with the version of Red Hat OpenShift Container Platform that you are running.

If you are upgrading to IBM Cloud Pak for Data Version 5.0, upgrade your storage after you upgrade IBM Cloud Pak for Data.

IBM Storage Fusion Global Data Platform
  • Version 2.7.2 with the latest hotfix or later fixes
  • Version 2.8.0 or later fixes
Available in IBM Storage Fusion or IBM Storage Fusion HCI System.

If you are upgrading to IBM Cloud Pak for Data Version 5.0, upgrade your storage after you upgrade IBM Cloud Pak for Data.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
  • Version 5.1.7 or later fixes, with CSI Version 2.9.0 or later fixes
Available in the following offerings:
  • IBM Storage Fusion
  • IBM Storage Suite for IBM Cloud Paks
Portworx
  • Version 2.13.3 or later fixes
  • Version 3.0.2 or later fixes
If you are running Red Hat OpenShift Container Platform Version 4.12, you must use Portworx Version 2.13.3 or later.

If you are running Red Hat OpenShift Container Platform Version 4.14, you must use Portworx Version 3.0.2 or later.

NFS
  • Version 3 or 4
Version 3 is recommended if you are using any of the following services:
  • Data Product Hub
  • Data Virtualization
  • DataStage
  • Db2
  • Db2 Big SQL
  • Db2 Warehouse
  • IBM Knowledge Catalog
  • IBM Knowledge Catalog Premium
  • IBM Knowledge Catalog Standard
  • OpenPages

If you use Version 4, ensure that your storage class uses NFS Version 3 as the mount option. For details, see Setting up dynamic provisioning.
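A storage class that pins the NFS mount protocol to Version 3 might look like the following sketch. The provisioner name and parameters here are placeholders for whichever NFS dynamic provisioner you deploy; the mountOptions entry is the relevant part:

```yaml
# Sketch of a dynamic NFS storage class that mounts volumes with NFS Version 3.
# The provisioner and parameters are placeholders; substitute the values from
# your own NFS provisioner deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: example.com/nfs-provisioner   # placeholder provisioner name
parameters:
  archiveOnDelete: "false"
mountOptions:
  - nfsvers=3    # force NFS Version 3 as the mount option
```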

Amazon Elastic Block Store (EBS)
  • Version: Not applicable
In addition to EBS storage, your environment must also include EFS storage.

Amazon Elastic File System (EFS)
  • Version: Not applicable
It is recommended that you use both EBS and EFS storage.

NetApp Trident
  • Version 23.07 or later fixes
This information applies to both self-managed and managed NetApp Trident.
Note: The preceding storage options have been evaluated by IBM. However, you should run the Cloud Pak for Data storage validation tool on your Red Hat OpenShift cluster to:
  • Evaluate whether the storage on your cluster is sufficient for use with Cloud Pak for Data.
  • Assess storage provided by other vendors. This tool does not guarantee support for other types of storage. You can use other storage environments at your own risk.

What storage options are supported on my deployment environment?

Even if Cloud Pak for Data supports a storage option, you can use that storage only if it is also supported on your deployment environment. Ensure that you select a storage option that:
  • Works on your chosen deployment environment.

    Some storage options are supported only on a specific deployment environment.

    Best practice: For clusters hosted on third-party infrastructure, such as IBM Cloud or Amazon Web Services, use storage that is native to, or well integrated with, that infrastructure whenever possible.
  • Supports the services that you plan to install.

    Some services support a subset of the storage options that are supported by the platform. For details, see Storage requirements.

  • Has sufficient I/O performance.

    For information on how to test I/O performance, see Disk requirements.
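One common way to sample disk performance is an fio job. The following job file is an illustrative starting point only, not the official test procedure; the target directory, sizes, and job parameters are placeholders, and the thresholds to compare results against are in Disk requirements:

```ini
; cp4d-io-sample.fio : illustrative mixed random read/write sample.
; Run on a worker node with: fio cp4d-io-sample.fio
[global]
ioengine=libaio
direct=1
bs=4k
size=1G
runtime=60
time_based
group_reporting

[randrw-sample]
rw=randrw
rwmixread=70
numjobs=4
iodepth=16
directory=/var/lib/fio-test
```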

Deployment environment Managed OpenShift Self-managed OpenShift
On-premises
With managed OpenShift, IBM Cloud Satellite supports the following storage options:
  • OpenShift Data Foundation
  • Portworx
The following storage options are supported on bare metal and VMware infrastructure with self-managed OpenShift:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • IBM Storage Fusion Global Data Platform
  • IBM Storage Scale Container Native
  • Portworx
  • NFS
  • NetApp Trident
IBM Cloud
Red Hat OpenShift on IBM Cloud supports the following storage options on VPC infrastructure:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
The following storage options are supported on IBM Cloud VPC infrastructure with self-managed OpenShift:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS
Amazon Web Services (AWS)
Red Hat OpenShift Service on AWS (ROSA) supports the following storage options:
  • IBM Storage Fusion Global Data Platform
  • Amazon Elastic Block Store (EBS)
  • Amazon Elastic File System (EFS)
  • NetApp Trident (includes Amazon FSx for NetApp ONTAP)
The following storage options are supported on AWS infrastructure with self-managed OpenShift:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Amazon Elastic Block Store (EBS)
  • Amazon Elastic File System (EFS)
  • Portworx
  • NFS
  • NetApp Trident (includes Amazon FSx for NetApp ONTAP)
Microsoft Azure
Azure Red Hat OpenShift (ARO) supports the following storage options:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS
The following storage options are supported on Microsoft Azure infrastructure with self-managed OpenShift:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS, specifically Microsoft Azure locally redundant Premium SSD storage
Google Cloud
Red Hat OpenShift Dedicated on Google Cloud supports the following storage options:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
The following storage options are supported on Google Cloud infrastructure with self-managed OpenShift:
  • OpenShift Data Foundation
  • IBM Storage Fusion Data Foundation
  • Portworx
  • NFS

What storage options are supported on the version of Red Hat OpenShift Container Platform that I am running?

Storage option Version 4.12 Version 4.14 Version 4.15
OpenShift Data Foundation
IBM Storage Fusion Data Foundation
IBM Storage Fusion Global Data Platform
IBM Storage Scale Container Native
Portworx
NFS
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident

What storage options are supported on my hardware?

Storage option x86-64 Power s390x
OpenShift Data Foundation
IBM Storage Fusion Data Foundation
IBM Storage Fusion Global Data Platform
IBM Storage Scale Container Native
Portworx    
NFS
Amazon Elastic Block Store (EBS)    
Amazon Elastic File System (EFS)    
NetApp Trident    

Storage comparison

Use the following information to decide which storage solution is right for you:


License requirements

The following table lists whether you need a separate license to use each storage option.

Storage option Details
OpenShift Data Foundation

IBM Cloud Pak for Data customers can obtain OpenShift Data Foundation Essentials storage entitlement at no charge.

Entitlement terms

You are entitled to up to 12 TB of OpenShift Data Foundation Essentials per cluster in internal deployment mode. If you exceed this amount, a separate license is required. There is no time usage limit, and this entitlement is supported by IBM.

IBM Storage Fusion Data Foundation

IBM Cloud Pak for Data customers can obtain IBM Storage Fusion storage entitlement at no charge.

Entitlement terms

You are entitled to up to 12 TB of IBM Storage Fusion Data Foundation per cluster in internal deployment mode. If you exceed this amount, a separate license is required. There is no time usage limit, and this entitlement is supported by IBM.

IBM Storage Fusion Global Data Platform

IBM Cloud Pak for Data customers can obtain IBM Storage Fusion storage entitlement at no charge.

Entitlement terms

You are entitled to up to 12 TB of IBM Storage Fusion Data Foundation per cluster in internal deployment mode. If you exceed this amount, a separate license is required. There is no time usage limit, and this entitlement is supported by IBM.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface) You can use IBM Storage Scale Container Native as part of IBM Storage Fusion.
Portworx A separate license is required.
NFS No license is required.
Amazon Elastic Block Store (EBS) A separate subscription is required.
Amazon Elastic File System (EFS) A separate subscription is required.
NetApp Trident
Self-managed NetApp Trident
A separate license is required.
Amazon FSx for NetApp ONTAP
A separate subscription is required.


Storage classes

The person who installs Cloud Pak for Data and the services on the cluster must know which storage classes to use during installation. The following table lists the required types of storage. When applicable, the table also lists the recommended storage classes to use and points to additional guidance on how to create the storage classes.

Storage option Details
OpenShift Data Foundation The recommended storage classes are automatically created when you install OpenShift Data Foundation.
Cloud Pak for Data uses the following storage classes:
  • RWX file storage: ocs-storagecluster-cephfs
  • RWO block storage: ocs-storagecluster-ceph-rbd
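As a sketch, a persistent volume claim that requests shared (RWX) file storage from the class above has the following general shape; the claim name and size are placeholders:

```yaml
# Illustrative claim against the OpenShift Data Foundation RWX file storage
# class. The name and requested size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-shared-volume
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs
```

Before you install, you can confirm that the expected storage classes exist on the cluster with oc get storageclass.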
IBM Storage Fusion Data Foundation The recommended storage classes are automatically created when you install IBM Storage Fusion Data Foundation.
Cloud Pak for Data uses the following storage classes:
  • RWX file storage: ocs-storagecluster-cephfs
  • RWO block storage: ocs-storagecluster-ceph-rbd
IBM Storage Fusion Global Data Platform The recommended storage class name depends on your environment:
  • If you are using IBM Storage Fusion, the recommended RWX storage class is called ibm-spectrum-scale-sc.
  • If you are using IBM Storage Fusion HCI System, the recommended storage class is called ibm-storage-fusion-cp-sc.

IBM Storage Fusion Global Data Platform supports both ReadWriteMany (RWX access) and ReadWriteOnce (RWO access) with the same storage class.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface) The recommended RWX storage class is called ibm-spectrum-scale-sc.

IBM Storage Scale Container Native supports both ReadWriteMany (RWX access) and ReadWriteOnce (RWO access) with the same storage class.

For details on creating the recommended storage class, see Setting up IBM Storage Fusion Global Data Platform or IBM Storage Scale Container Native storage.

Portworx The recommended storage classes are listed in Creating Portworx storage classes.
NFS The recommended RWX storage class is called managed-nfs-storage. For details on setting up dynamic provisioning and creating the recommended storage class, see Setting up NFS storage.
Amazon Elastic Block Store (EBS) Use either of the following RWO storage classes:
  • gp2-csi
  • gp3-csi
Amazon Elastic File System (EFS) The recommended RWX storage class is called efs-nfs-client. For details on setting up dynamic storage provisioning and creating the recommended storage class, see Setting up Amazon Elastic File System.
NetApp Trident
Self-managed NetApp Trident
The recommended RWX storage class is called ontap-nas. For details on setting up dynamic provisioning and creating the recommended storage class, see Setting up NetApp Trident.
Amazon FSx for NetApp ONTAP
The requirements are the same as self-managed NetApp Trident.


Data replication for high availability
Storage option Details
OpenShift Data Foundation Supported

By default, all services use multiple replicas for high availability. OpenShift Data Foundation maintains each replica in a distinct availability zone.

IBM Storage Fusion Data Foundation Supported

By default, all services use multiple replicas for high availability. IBM Storage Fusion Data Foundation maintains each replica in a distinct availability zone.

IBM Storage Fusion Global Data Platform Supported

Replication can be enabled within IBM Storage Fusion Global Data Platform in various ways. For details, see Data Mirroring and Replication in the IBM Storage Scale documentation.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface) Supported

Replication can be enabled within the IBM Storage Scale storage cluster in various ways. For details, see Data Mirroring and Replication in the IBM Storage Scale documentation.

Portworx Supported

By default, most services use a storage class that supports 3 replicas.

For details about the replicas for each storage class, see Creating Portworx storage classes.

For details about the storage classes required for each service, see Storage requirements.

NFS Replication support depends on your NFS server.
Amazon Elastic Block Store (EBS) Supported

When you create an EBS volume, it is automatically replicated within its Availability Zone to prevent data loss due to failure of any single hardware component.

Amazon Elastic File System (EFS) Supported

You can use EFS replication to create a replica of your EFS file system in the AWS Region of your choice. When you enable replication on an EFS file system, Amazon EFS automatically and transparently replicates the data and metadata on the source file system to the target file system. For details, see Amazon EFS replication.

NetApp Trident
Self-managed NetApp Trident

Supported

For details, see Data replication requirements in the NetApp Trident documentation.

Amazon FSx for NetApp ONTAP
The requirements are the same as self-managed NetApp Trident.


Storage-level backup and restore

Storage-level backup and restore does not include backup and restore of Cloud Pak for Data deployments.

Storage option Details
OpenShift Data Foundation Container Storage Interface (CSI) support for snapshots and clones.

Tight integration with Velero CSI plug-in for Red Hat OpenShift Container Platform backup and recovery.

IBM Storage Fusion Data Foundation Container Storage Interface support for snapshots and clones.

Tight integration with Velero CSI plug-in for Red Hat OpenShift Container Platform backup and recovery.

IBM Storage Fusion Global Data Platform
For storage level backup, see Backing up and restoring your data in the IBM Storage Fusion documentation.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface) For details, see Data protection and disaster recovery in the IBM Storage Scale documentation.
Portworx
On-premises
Limited support.
IBM Cloud
Supported with the Portworx Enterprise Disaster Recovery plan.
NFS Limited support.
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident


Cloud Pak for Data backup and restore

Cloud Pak for Data backup and restore is applicable to application-level backups and does not include backing up data on the storage device.

Storage
OADP
Offline backup and restore
Online backup and restore to the same cluster
Online backup and restore to a different cluster
Disaster recovery
Red Hat OpenShift Data Foundation
with
  • OADP
  • IBM Storage Fusion

with IBM Storage Fusion

 
IBM Storage Fusion Data Foundation  
with
  • OADP
  • IBM Storage Fusion

with IBM Storage Fusion

 
IBM Storage Fusion Global Data Platform  
with
  • OADP
  • IBM Storage Fusion

with IBM Storage Fusion

 
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)  
with
  • OADP
  • IBM Storage Fusion

with IBM Storage Fusion

 
Portworx  
with
  • OADP
  • Portworx backup and restore (requires Portworx v2.12.2 or higher)

with Portworx disaster recovery (asynchronous data replication)

with Portworx disaster recovery (asynchronous data replication)

NetApp Trident  
with
  • OADP
  • NetApp Astra Control Center

with NetApp Astra Control Center

 
NFS

Restic backups only

     
Amazon Elastic File System

Restore to same cluster with Restic backups only

     
Amazon Elastic File System and Amazon Elastic Block Store

Restore to same cluster with Restic backups only

     
Amazon FSx for NetApp ONTAP  
with
  • OADP
  • NetApp Astra Control Center

with NetApp Astra Control Center

 


Encryption of data at rest
Storage option Details
OpenShift Data Foundation Supported.

OpenShift Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher.

You must enable encryption for your whole cluster during cluster deployment to ensure encryption of data at rest. Encryption is disabled by default. Working with encrypted data incurs a small performance penalty. For more information, see Cluster-wide encryption in the OpenShift Data Foundation documentation.
You can also encrypt persistent volumes in addition to enabling encryption for the whole cluster. Persistent volume encryption is available for block storage only. For more information, see Storage class encryption in the OpenShift Data Foundation documentation.
Support for FIPS cryptography
By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion (network data) are protected by FIPS Validated Modules in Process encryption. You can configure your cluster to encrypt the root file system of each node. For more information, see FIPS-140-2 in the OpenShift Data Foundation documentation.
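As an illustrative sketch of persistent volume encryption, an encryption-enabled block storage class has roughly the following shape. All parameter values here are placeholders; the actual values come from your OpenShift Data Foundation deployment and its KMS connection configuration, and the console can generate such a class for you:

```yaml
# Sketch of an encrypted RBD block storage class. Values are placeholders;
# see Storage class encryption in the OpenShift Data Foundation documentation.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-storagecluster-ceph-rbd-encrypted
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: ocs-storagecluster-cephblockpool
  encrypted: "true"
  encryptionKMSID: my-kms-connection   # ID from your KMS connection details
reclaimPolicy: Delete
allowVolumeExpansion: true
```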
 
IBM Storage Fusion Data Foundation Supported.

IBM Storage Fusion Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher.

You must enable encryption for your whole cluster during cluster deployment to ensure encryption of data at rest. Encryption is disabled by default. Working with encrypted data incurs a small performance penalty. For details, see Security considerations in the IBM Storage Fusion documentation.
Support for FIPS cryptography
By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion (network data) are protected by FIPS Validated Modules in Process encryption. You can configure your cluster to encrypt the root file system of each node. For details, see FIPS-140-2 in the IBM Storage Fusion documentation.
 
IBM Storage Fusion Global Data Platform Supported

For details, see Encryption in the IBM Storage Scale documentation.

IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface) Supported

For details, see Encryption in the IBM Storage Scale documentation.

Portworx Supported with Portworx Enterprise only.

Portworx uses the LUKS format of dm-crypt and AES-256 as the cipher with xts-plain64 as the cipher mode.

On-premises deployments
Refer to Enabling Portworx volume encryption in the Portworx documentation.
IBM Cloud deployments
To protect the data in your Portworx volumes, encrypt the volumes with IBM Key Protect or Hyper Protect Crypto Services.
NFS Check with your storage vendor on the steps to enable encryption of data at rest.
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident
Self-managed NetApp Trident

Supported

For details, see Encryption of data at rest in the NetApp Trident documentation.

Amazon FSx for NetApp ONTAP
The requirements are the same as self-managed NetApp Trident.


Network and I/O requirements
Storage option Details
OpenShift Data Foundation
Network requirements
Your network must support a minimum of 10 Gbps.
I/O requirements
Each node must have at least one enterprise-grade SSD or NVMe device that meets the Disk requirements in the system requirements.

If SSD or NVMe devices aren't available in your deployment environment, use an equivalent or better device.

IBM Storage Fusion Data Foundation
Network requirements
Your network must support a minimum of 10 Gbps.
I/O requirements
Each node must have at least one enterprise-grade SSD or NVMe device that meets the Disk requirements in the system requirements.

If SSD or NVMe devices aren't available in your deployment environment, use an equivalent or better device.

IBM Storage Fusion Global Data Platform
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
Portworx
Network requirements
Your network must support a minimum of 10 Gbps.

For details, see Prerequisites in the Portworx documentation.

I/O requirements
For details, see Disk requirements in the system requirements.

For details on performance, see FIO performance in the Portworx documentation.

NFS
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
Amazon Elastic Block Store (EBS)
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
Amazon Elastic File System (EFS)
Network requirements
You must have sufficient network performance to meet the storage I/O requirements.
I/O requirements
For details, see Disk requirements in the system requirements.
NetApp Trident
Network requirements
Self-managed NetApp Trident
You must have sufficient network performance to meet the storage I/O requirements.
Amazon FSx for NetApp ONTAP
The requirements are the same as self-managed NetApp Trident.
I/O requirements
Self-managed NetApp Trident
For details, see Disk requirements in the system requirements.
Amazon FSx for NetApp ONTAP
The requirements are the same as self-managed NetApp Trident.


Resource requirements

This section describes the resource requirements for the various storage options.

For information about the minimum amount of storage that is required for your environment, see Storage requirements.

Important: Work with your IBM® Sales representative to ensure that you have sufficient storage for the services that you plan to run on Cloud Pak for Data and for your expected workload.
Storage option vCPU Memory Storage
OpenShift Data Foundation
  • 10 vCPU per node on the initial three nodes
  • 2 vCPU per node on any additional nodes
For details, see Resource requirements.
  • 24 GB of RAM per node on the initial three nodes
  • 5 GB of RAM per node on any additional nodes
For details, see Resource requirements.
A minimum of three nodes.

On each node, you must have at least one SSD or NVMe device. Each device should have at least 1 TB of available storage.

For details, see Resource requirements.
IBM Storage Fusion Data Foundation
  • 10 vCPU per node on the initial three nodes
  • 2 vCPU per node on any additional nodes
For details, see System requirements.
  • 24 GB of RAM per node on the initial three nodes
  • 5 GB of RAM per node on any additional nodes
For details, see System requirements.
A minimum of three nodes.

On each node, you must have at least one SSD or NVMe device. Each device should have at least 1 TB of available storage.

For details, see System requirements.
IBM Storage Fusion Global Data Platform 8 vCPU on each worker node to deploy IBM Storage Fusion Global Data Platform.
For details, see System requirements.
16 GB of RAM on each worker node.
For details, see System requirements.
1 TB or more of available space.
For details, see System requirements.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface) 8 vCPU on each worker node to deploy IBM Storage Scale Container Native and IBM Storage Scale Container Storage Interface Driver.

See Hardware requirements in the IBM Storage Scale Container Native documentation.

16 GB of RAM on each worker node.

See Hardware requirements in the IBM Storage Scale Container Native documentation.

1 TB or more of available space.

See Hardware requirements in the IBM Storage Scale Container Native documentation.

Portworx
On-premises
4 vCPU on each storage node
IBM Cloud
For details see the following sections of Storing data on software-defined-storage (SDS) with Portworx:
  • What worker node flavor in Red Hat OpenShift on IBM Cloud is the right one for Portworx?
  • What if I want to run Portworx in a classic cluster with non-SDS worker nodes?
4 GB of RAM on each storage node
A minimum of three storage nodes.
On each storage node, you must have:
  • A minimum of 1 TB of raw, unformatted disk
  • An additional 100 GB of raw, unformatted disk for a key-value database.
NFS
  • 8 vCPU on the NFS server
  • 32 GB of RAM on the NFS server
  • 1 TB or more of available space
Amazon Elastic Block Store (EBS)
Amazon Elastic File System (EFS)
NetApp Trident
Self-managed NetApp Trident
  • vCPU: Not applicable
  • Memory: Not applicable
  • Storage: 1 TB or more of available space
Amazon FSx for NetApp ONTAP
The requirements are the same as self-managed NetApp Trident.

Additional documentation

Storage option Documentation links
OpenShift Data Foundation
Installation
See the Product Documentation for Red Hat OpenShift Data Foundation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting OpenShift Data Foundation in the OpenShift Data Foundation documentation.
IBM Storage Fusion Data Foundation
Installation
  1. To deploy IBM Storage Fusion, see Deploying IBM Storage Fusion in the IBM Storage Fusion documentation.
  2. To install the Data Foundation service, see IBM Storage Fusion Data Foundation in the IBM Storage Fusion documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting IBM Storage Fusion in the IBM Storage Fusion documentation.
IBM Storage Fusion Global Data Platform
Installation
  1. To deploy IBM Storage Fusion, see Deploying IBM Storage Fusion in the IBM Storage Fusion documentation.
  2. To install the Global Data Platform service, see IBM Storage Fusion Global Data Platform in the IBM Storage Fusion documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting IBM Storage Fusion in the IBM Storage Fusion documentation.
IBM Storage Scale Container Native (with IBM Storage Scale Container Storage Interface)
Installation
See Installing the IBM Storage Scale container native operator and cluster in the IBM Storage Scale Container Native documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
Portworx
Installation
See Install Portworx on OpenShift in the Portworx documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See the product documentation for Troubleshoot Portworx on Kubernetes.
NFS
Installation
Refer to the installation documentation for your NFS storage provider.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
Refer to the documentation from your NFS provider.
Amazon Elastic Block Store (EBS)
Installation
Managed OpenShift
EBS is provisioned by default when you install Red Hat OpenShift Service on AWS (ROSA).
Self-managed OpenShift
EBS is provisioned by default when you install Red Hat OpenShift Container Platform on AWS infrastructure.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See the AWS documentation.
Amazon Elastic File System (EFS)
Installation
Install EFS from the AWS Console. It is recommended that you create a regional file system. For details, see Getting started in the Amazon Elastic File System documentation.
Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting Amazon EFS in the AWS documentation.
NetApp Trident
Installation
Self-managed NetApp Trident
See Learn about Astra Trident installation in the NetApp Astra Trident documentation.
Amazon FSx for NetApp ONTAP
Subscribe to the Amazon FSx for NetApp ONTAP service.
Use the following recommendations when you set up your Amazon FSx for NetApp ONTAP file system:
  • Use the Standard create option.
  • For high availability, you must use the Multi-AZ deployment type.
  • The provisioned throughput should be 128 MB per second or higher.

For more information about creating the file system, see Step 1: Create an Amazon FSx for NetApp ONTAP file system.

Cloud Pak for Data configuration guidance
For post-installation guidance, see Configuring persistent storage for IBM Cloud Pak for Data.
Troubleshooting
See Troubleshooting in the product documentation.