Hardware requirements

The following sections describe the hardware requirements to consider when deploying IBM Storage Scale container native.

Worker node requirements

IBM Storage Scale container native supports x86_64, ppc64le, and s390x CPU architectures. All nodes in the Red Hat OpenShift cluster must have the same architecture. The ARM architecture is not supported.

Starting with IBM Storage Scale container native v5.1.9, the UBI 9 base image is used, which is based on RHEL 9. Because RHEL 9 no longer supports Power8, IBM Storage Scale container native no longer supports Power8 either. For more information, see Minimum IBM Power requirements.

IBM Storage Scale container native deploys several pods in the cluster. The following table shows the resource requirements of those pods.

Table 1. Hardware requirements
Pod | Where deployed | CPU request | Memory request | Storage | Notes
core (created with the Kubernetes node short name) | Nodes that are labeled with the nodeSelector in the Cluster CR | Minimum 1000m CPU; the sample CR sets 2000m CPU | Minimum 4 GiB (2 GiB on s390x); the sample CR sets 4 GiB for the client role and 8 GiB for the storage role | Configuration in /var (~25 GiB) | The pod that provides the file system service for the node. It must be deployed on all nodes where PVs are accessed from application pods. The CPU and memory requests and limits can be customized in the Cluster CR.
operator | Single node | 500m CPU | 200 MiB | - | The controller runtime that manages all custom resources
gui | Two nodes | 630m CPU | 1.25 GiB | Local PV for DB | The graphical user interface and REST API
pmcollector | Two nodes | 120m CPU | 3-7 GiB, depending on cluster size | Local PV for DB | The performance collector database
grafana-bridge | Single node | 100m CPU | 1 GiB | - | The bridge for accessing pmcollector from Grafana

The values shown are requests; the limits are set higher. For more information, see Kubernetes resource management. For CPU, this means that the pods can burst above their request at times when the CPU has free cycles. For memory, the pods should not exceed their request significantly. By default, the core pods request 25% of the worker node capacity, which might be oversized for many applications. For more information about configuring the CPU and memory requests, see Cluster custom resource.

* Allocating more resources to IBM Storage Scale results in better storage performance.
* Allocating fewer resources allows more applications to be scheduled on the node.
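
The core pod requests can be adjusted in the Cluster custom resource. The following sketch follows the structure of the sample Cluster CR that ships with IBM Storage Scale container native; the exact field names and values should be verified against the Cluster custom resource documentation for your release:

    # Sketch only: resource requests for the core pods, per role.
    # Field names follow the sample Cluster CR; verify them for your release.
    apiVersion: scale.spectrum.ibm.com/v1beta1
    kind: Cluster
    metadata:
      name: ibm-spectrum-scale
      namespace: ibm-spectrum-scale
    spec:
      daemon:
        roles:
          - name: client        # core pods on nodes that only access PVs
            resources:
              cpu: "2"          # request; the limit defaults to twice this value
              memory: 4Gi
          - name: storage       # core pods on nodes that provide local disks
            resources:
              cpu: "2"
              memory: 8Gi

Reducing these values below the documented minimums is possible, but as described in the following paragraphs it can destabilize the file system daemon.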

For CPU, the allocation can be reduced if the core pods consistently stay under their request; the CPU usage can be monitored in the Red Hat OpenShift console. If the allocation is set too low, the file system daemon might be starved of CPU cycles, which destabilizes the whole cluster and can result in outages. For memory, there is no comparable monitoring: allocating more memory means that more data is cached, which can boost performance, but this is only visible indirectly by observing application performance.

The CPU request can be reduced below the 1000m CPU minimum, and your system might run fine with, for example, 100m CPU. However, if a service ticket is opened for an issue that might in any way be related to this setting, you are asked to go back up to 1000m CPU, and the ticket is accepted only if the problem still occurs. Examples of related issues are node expels, lag on PV creation in CSI, slow policy runs, bad performance, long waiters, and so on.

The Red Hat OpenShift console reports all worker nodes as overcommitted. The reason is that the CPU and memory limits of the pods add up to more than the total capacity of the node. This is normal and no reason for concern: pods are scheduled based on their requests, and the scheduler ensures that nodes are not overcommitted in this regard. Higher limits allow pods to use resources that are free at the moment, but only the requested resources are guaranteed to them by Kubernetes. For more information about pod scheduling, see Kubernetes resource management.

Pod memory requests and limits

The following table shows the memory and CPU requests and limits that are configured for each pod that is created.

Table 2. Pod memory requests and limits
Namespace | Pod name | Container | Memory request \ limit | CPU request \ limit
ibm-spectrum-scale | Grafana Bridge | grafanabridge | 1000Mi \ 4000Mi | 100m \ 500m
ibm-spectrum-scale | GUI | liberty | 750Mi \ 1000Mi | 500m \ 2
ibm-spectrum-scale | GUI | postgres | 250Mi \ 500Mi | 100m \ 1
ibm-spectrum-scale | GUI | sysmon | 200Mi \ 500Mi | 20m \ 100m
ibm-spectrum-scale | GUI | logs | 20Mi \ 60Mi | 10m \ 100m
ibm-spectrum-scale | PM Collector | pmcollector | 5000Mi \ 10000Mi | 100m \ 500m
ibm-spectrum-scale | PM Collector | sysmon | 200Mi \ 500Mi | 20m \ 100m
ibm-spectrum-scale | Core | gpfs | 4Gi \ 8Gi¹ | 2 \ 4²
ibm-spectrum-scale | Core | gpfs | 8Gi \ 16Gi³ | 2 \ 4⁴
ibm-spectrum-scale | Core | logs | 20Mi \ 60Mi | 10m \ 100m
ibm-spectrum-scale-operator | controller-manager | manager | 200Mi \ 400Mi | 500m \ 1500m
ibm-spectrum-scale-dns | CoreDNS | coredns | 70Mi \ 140Mi | 50m \ 100m

For more information, see Requests and limits in the Kubernetes documentation.
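
For reference, this is how requests and limits appear in a container specification. The following is a generic Kubernetes example with illustrative values, not one of the IBM Storage Scale manifests:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example            # illustrative pod, not part of the deployment
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest   # placeholder image
          resources:
            requests:
              cpu: 500m        # guaranteed to the container; used for scheduling
              memory: 200Mi
            limits:
              cpu: "1"         # the container can burst up to this value
              memory: 400Mi    # exceeding this value terminates the container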

Local file system

This feature is available as a technology preview. Technology preview features are not supported for use within production environments; use them with nonproduction workloads, in demo or proof-of-concept environments only. IBM production service level agreements (SLAs) are not supported. Technology preview features might not be functionally complete. The purpose of technology preview features is to give access to new and exciting technologies, enabling customers to test functionality and provide feedback during the development process. The availability of a technology preview feature in a future release is not guaranteed. Feedback is welcome and encouraged.

The Red Hat OpenShift nodes can provide disks or volumes to be used as local storage for creating PVs. These disks or volumes serve as the storage for a local file system. A disk or volume can be a single drive, a partition of a single drive, or a volume from a RAID controller. Disks must be attached to the Red Hat OpenShift nodes in a shared-nothing configuration; shared disks or SANs are not supported.

Red Hat OpenShift nodes that provide disks or volumes are called storage nodes. At least 3 storage nodes must exist.
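
How a disk on a storage node is declared can be sketched as follows. This assumes a LocalDisk custom resource that names the owning node and the device path; the kind, API version, and field names shown here are assumptions and must be checked against the local file system documentation for your release:

    # Hypothetical sketch: makes /dev/sdb on one storage node available for a local file system.
    # The kind, apiVersion, and field names must be verified for your release.
    apiVersion: scale.spectrum.ibm.com/v1beta1
    kind: LocalDisk
    metadata:
      name: worker0-sdb
      namespace: ibm-spectrum-scale
    spec:
      node: worker0.example.com   # storage node that owns the disk (example name)
      device: /dev/sdb            # shared-nothing disk or volume attached to that node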

The local file system uses 3-way replication, so the usable capacity is one third of the total capacity of the disks or volumes. For example, three storage nodes with 4 TiB of local disk each (12 TiB raw) provide about 4 TiB of usable capacity. Each data block is written to three disks that are located in different failure groups. By default, a failure group consists of the disks or volumes of one storage node. If Kubernetes zones are defined for the storage nodes, all nodes within a zone share one failure group. In this case, at least 3 zones must exist.

Equal disk or volume capacity within each zone allows optimal usage of the storage. Disks or volumes with the same size and performance characteristics are beneficial.

Disks or volumes with a maximum capacity of 4 TiB are supported. VMware⁵ thin-provisioned virtual disks are not supported. For more information, see the "Disk Questions" section in the IBM Storage Scale FAQ.


  1. By default, the memory limit is set to double the memory request. It can be set to a custom value in the Cluster custom resource; the custom value must be at least twice the memory request.

  2. By default, the CPU limit is set to double the CPU request. It can also be modified in the Cluster custom resource, but the custom value must be at least twice the CPU request.

  3. Core pods with the storage role request 8Gi of memory (instead of the 4Gi that is requested for core pods with the client role). The storage role is used for core pods that run on Red Hat OpenShift nodes that provide disks or volumes for local file systems.

  4. For core pods with the storage role, the CPU limit is also set to double the CPU request by default. The storage role is used for core pods that run on Red Hat OpenShift nodes that provide disks or volumes for local file systems.

  5. VMware, the VMware logo, VMware Cloud Foundation, VMware Cloud Foundation Service, VMware vCenter Server, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in the United States and/or other jurisdictions.