Hardware requirements and recommendations
Review the minimum CPU, memory, and disk space requirements for setting up and running IBM® Cloud Private clusters.
Note: Verify that you meet the increased memory requirements. For more information, see the Hardware requirements section.
The following tables list the minimum system requirements per node for running IBM Cloud Private. The minimum requirement for IBM Cloud Private is one master (and proxy) node, one management node, and one worker node.
- Hardware requirements
- Disk space requirements
- PowerVM environment requirements
- Linux on IBM Z and LinuxONE environment requirements
Hardware requirements
Single node requirements
Requirement | All management services enabled | All management services (including logging) disabled |
---|---|---|
Number of hosts | 1 | 1 |
Cores | 8 or more | 8 or more |
CPU | >=2.4 GHz | >=2.4 GHz |
RAM | 32 GB or more | 16 GB or more |
Free disk space to install | >=200 GB | >=150 GB |
Note for CPUs:
- For a Linux® x86_64 cluster, use a CPU that supports SSE 4.2.
- For a Linux® on Power® (ppc64le) cluster, use a CPU that is version Power8 or higher.
- For a Linux® on IBM® Z and LinuxONE cluster, use a CPU that is either version EC12 or later or any LinuxONE system.
Multi-node requirements
Note: If you do not use a management node in your multi-node cluster, ensure that the master node meets the combined requirements of the master node and the management node.
Requirement | Boot node | Master node | Proxy node | Worker node | Management node | VA node | Etcd node |
---|---|---|---|---|---|---|---|
Number of hosts | 1 | 1, 3, or 5 | 1 or more | 1 or more | 1 or more | 1, 3, or 5 | An odd number, 1 or more |
Cores | 1 or more | 8 or more | 2 or more | 2 or more | 8 or more | 8 or more | 1 or more |
CPU | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz |
RAM | >=4 GB | >=16 GB | >=4 GB | >=4 GB | >=16 GB | >=16 GB | >=4 GB |
Free disk space to install | >=100 GB | | | >=150 GB | >=300 GB | | >=100 GB |
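The note above about running without a dedicated management node can be made concrete: the combined node must be sized for the sum of both roles. A minimal sketch, using the master minimums from the table and the 16 GB management-node memory figure cited later in the notes:

```shell
# Combined master + management sizing when no separate management node
# exists (values taken from this section; adjust if they change).
master_ram=16; management_ram=16        # GB
master_cores=8; management_cores=8
echo "combined RAM:   $((master_ram + management_ram)) GB"
echo "combined cores: $((master_cores + management_cores))"
```

The same additive rule applies to the per-directory disk requirements described later in this topic.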
Notes:
- For CPUs:
  - For a Linux x86_64 cluster, use a CPU that supports SSE 4.2.
  - For a Linux on Power (ppc64le) cluster, use a CPU that is version Power8 or higher.
  - For a Linux on IBM Z and LinuxONE cluster, use a CPU that is either version EC12 or later or any LinuxONE system.
- A virtual processor core (VPC) is a unit of measurement that is used to determine the licensing cost of IBM products. It is based on the number of virtual cores (vCPUs) that are available to the product. A vCPU is a virtual core that is assigned to a virtual machine, or a physical processor core if the server is not partitioned for virtual machines. A vCPU is equivalent to a Kubernetes CPU. For more information, see Kubernetes Meaning of CPU.
- If you disable logging or monitoring, or both, during installation, you can save some RAM and CPU. If you want to enable logging or monitoring, or both, see the sample deployment sizes in Sizing your cluster.
- By default, systemReserved and kubeReserved reserve 0.2 GHz of CPU processing and 512 MB of memory. You can reserve more resources to make the Kubernetes platform more stable, especially on the Power platform. Remember: The additional reserved resources must be considered when you plan your hardware requirements.
- For the Power platform, the following example contains the suggested values. See Reconfiguring Kubelet in a live cluster for the steps that are required to set the values. For a single management node, which requires 16 GB of memory, you must expand your management host resource to use at least 20 GB of memory before reconfiguring Kubelet in a live cluster.

  systemReserved:
    cpu: "500m"
    memory: "1500Mi"
    ephemeral-storage: "1Gi"
  kubeReserved:
    cpu: "500m"
    memory: "1500Mi"
    ephemeral-storage: "1Gi"
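For orientation, the same reservations can be expressed as upstream kubelet command-line flags. This is a sketch only: the flag names come from stock Kubernetes, and how IBM Cloud Private actually applies these values may differ (see Reconfiguring Kubelet in a live cluster).

```shell
# Sketch: upstream-kubelet equivalents of the reservations above.
# Not an ICP-specific invocation; consult the ICP Kubelet docs.
kubelet \
  --system-reserved=cpu=500m,memory=1500Mi,ephemeral-storage=1Gi \
  --kube-reserved=cpu=500m,memory=1500Mi,ephemeral-storage=1Gi
```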
Disk space requirements
Installation-time disk space requirements
Location | Minimum disk space | Node | Description |
---|---|---|---|
Directory for placing offline images | 50 GB | Boot node | The directory is used for storing installation files. |
Directory for loading offline images | 100 GB | Boot node | The directory is used for loading the offline images by Docker. |
Note: The installation-time disk space requirements must be met for the installation to succeed. In Installing IBM Cloud Private, the directory for placing offline images is /opt/ibm-cloud-private-3.1.2 and the directory for loading offline images is the directory in which you place the installation file.
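As a hedged illustration of the step that consumes this space, loading the offline images typically streams the downloaded archive into Docker. The archive name below is an assumption (an x86_64 build is shown); use the file that you actually downloaded.

```shell
# Sketch: load the offline images so that Docker can use them.
# The archive file name is an assumption for illustration only.
cd /opt/ibm-cloud-private-3.1.2
tar xf ibm-cloud-private-x86_64-3.1.2.tar.gz -O | sudo docker load
```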
Runtime disk space requirements
Location | Minimum disk space | Ideal disk space | Node |
---|---|---|---|
/ | 300 GB | >=800 GB | Master and management |
/ | 200 GB | >=600 GB | Worker, proxy, and etcd |
/ | 300 GB | >=1000 GB | VA |
/tmp/ | 50 GB | >=50 GB | All nodes |
/var/ | 200 GB | >=700 GB | Master and management |
/var/ | 150 GB | >=550 GB | Worker, proxy, and etcd |
/var/ | 200 GB | >=900 GB | VA |
/var/lib/docker | 100 GB | >=400 GB | All nodes |
/var/lib/etcd | 10 GB | >=20 GB | Master or etcd |
/var/lib/etcd-wal | 2 GB | >=4 GB | Master or etcd |
/var/lib/icp | 50 GB | >=150 GB | Master and management |
/var/lib/icp/va | 100 GB | >=350 GB | VA |
/var/lib/kubelet | 30 GB | >=150 GB | All nodes |
/var/lib/registry | 50 GB | >=50 GB | Master |
/var/log/cloudsight | 10 GB | >=10 GB | VA |
/var/lib/icp/logging | 25 GB | Calculated based on amount of logs kept | Management |
Important: The /var directory is the default storage location for most Docker images and the containers that are used in your IBM Cloud Private cluster. The following directories are used by the installer but do not require significant amounts of disk space:
- /etc/cfc - this directory stores the IBM Cloud Private configuration and certification key file.
- /opt/ibm/cfc - this directory stores the IBM Cloud Private license files.
To prevent disk space issues, mount the default storage directories on separate paths that have larger disk capacities. For more information about mounting the Docker storage directory (/var/lib/docker), see Specifying a default Docker storage directory by using bind mount.
You can also use this bind method to mount the other IBM Cloud Private default storage directories. To prevent disk space issues in your cluster, you might want to use a bind mount to mount the following directories:
- Etcd - /var/lib/etcd
- VA - /var/lib/icp
- Kubelet service - /var/lib/kubelet
For more information about mounting the default storage directories, see Specifying other default storage directories by using bind mount.
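A minimal sketch of the bind-mount approach, assuming a larger disk is already mounted at /data (the path is an assumption); the linked topics are the authoritative steps, and Docker should be stopped before its storage directory is rebound.

```shell
# Sketch: back the default Docker storage directory with a larger disk.
# /data/docker is an assumed path; adapt it to your environment.
mkdir -p /data/docker /var/lib/docker
mount --bind /data/docker /var/lib/docker
# Persist the bind mount across reboots:
echo '/data/docker /var/lib/docker none bind 0 0' >> /etc/fstab
```

The same pattern applies to /var/lib/etcd, /var/lib/icp, and /var/lib/kubelet.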
Notes:
- The disk space requirements that are mentioned in table 4 include the space of the subdirectories, and can be reduced if subdirectories are located elsewhere.
- Minimum disk space is the minimum space for running. Follow the ideal disk space requirements in a production environment.
- If multiple cluster roles are installed on one node, the disk requirement is the sum of the disk requirements for each role. In a production environment, it is not recommended to install multiple cluster roles on one node.
- If the etcd node is separated, the /var/lib/etcd directory is on the etcd node.
- On worker nodes, the /var/lib/docker directory requires more disk space because the production images are placed inside it.
- The /var/lib/registry directory is a shared mount from an external shared file system and needs at least 50 GB if the cluster is a mixed cluster. It must be large enough to host all of the Docker images that you plan to store in your private image registry.
- The /var/lib/kubelet directory needs at least 10 GB of disk space. If you enable the Vulnerability Advisor, the VA node needs >=100 GB of disk space.
- Because etcd is sensitive to disk write latency, consider mounting the following directories on a dedicated disk:
  - /var/lib/etcd
  - /var/lib/etcd-wal
  A reasonably fast disk can typically satisfy the disk speed requirements. As a best practice, use a solid-state drive (SSD). To check whether your disk speed is sufficient, you can use fio, a popular I/O tester. If you plan to use fio, use version 3.5 or later. For example, the following command uses fio to check the speed of the etcd directory /var/lib/etcd-wal:

  fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-wal --size=220m --bs=2300 --name=mytest

  Check the 99th percentile of the fdatasync durations in the command output; it should be less than 10 ms. For more information, see Using fio to tell whether your storage is fast enough for etcd. For more information about etcd disk requirements, see etcd disks.
- For the /var/lib/icp/logging directory, SSD drives are known to perform better than spinning disks. Always use local storage. Avoid remote file systems such as NFS or SMB. For more information, see the Elasticsearch reference.
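To make the 10 ms guideline concrete, the nearest-rank 99th percentile can be computed from a list of fdatasync durations, one value per line in milliseconds. This is a sketch with made-up sample values; in practice you would feed in the durations that fio reports.

```shell
# Print the nearest-rank 99th percentile of the input durations (ms).
# The five sample values below are fabricated for illustration.
printf '%s\n' 0.8 1.2 0.9 2.5 1.1 | sort -n | awk '
  { v[NR] = $1 }
  END {
    rank = int(NR * 0.99) + (NR * 0.99 > int(NR * 0.99) ? 1 : 0)
    if (rank < 1) rank = 1
    print v[rank]
  }'
```

A printed value below 10 would indicate that the disk meets the etcd write-latency guideline.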
Installation-time CPU and memory requirements
Note: The jobs in the following table run one time during the installation process.
Component | CPU | Memory |
---|---|---|
client-registration IAM job | 100 millicores (m) | 128 MB |
security-onboarding IAM job | 20 m | 50 MB |
iam-onboarding IAM job | 20 m | 50 MB |
Runtime CPU and memory requirements
Component | CPU | Memory |
---|---|---|
auth-idp IAM service | 210 m | 660 MB |
auth-pap IAM service | 70 m | 220 MB |
auth-pdp IAM service | 30 m | 50 MB |
secret-watcher IAM service | 10 m | 10 MB |
system-healthcheck-service | 25 m | 32 MB |
PowerVM environment requirements
The values in the following table apply specifically to PowerVM environments. They do not apply to the Kernel-based virtual machine (KVM) environments or the bare-metal environments.
Requirement | Boot node | Master node | Proxy node | Worker node | Management node | VA node | Etcd node |
---|---|---|---|---|---|---|---|
Number of hosts | 1 | 1, 3, or 5 | 1 or more | 1 or more | 1 or more | 1, 3, or 5 | An odd number, 1 or more |
vCPUs | 1 or more | 2 or more | 1 or more | 1 or more | 2 or more | | 1 or more |
Processor units | 0.5 or more | | | 1 or more | | | 1 or more |
Recommendations for PowerVM environments:
- At least four virtual machines (VMs, also called LPARs) are recommended, with the master node, management node, Vulnerability Advisor node, and worker nodes on separate virtual machines. In high-scale environments, the etcd node must also be on a separate VM.
- Using shared, uncapped processors is recommended to allow for overcommitting CPUs as needed. If dedicated processors are used, follow the guidelines for Cores (vCPUs). For shared processor pool recommendations, see Shared processors.
- Large-scale environments can require more vCPUs and processor units. However, try to maintain the vCPU-to-processor-unit ratio that is listed in the previous table. For example, if you use eight vCPUs for your management node, use four processor units.
- For more information about configuring etcd nodes, see Setting the node roles in the hosts file.
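The ratio guidance above reduces to simple arithmetic. The 2:1 vCPU-to-processor-unit ratio below comes from the management-node example in the text; actual entitlements depend on your PowerVM configuration.

```shell
# Keep the example's 2:1 vCPU-to-processor-unit ratio when scaling up.
vcpus=8
processor_units=$((vcpus / 2))
echo "for $vcpus vCPUs, use $processor_units processor units"
```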
Linux on IBM Z and LinuxONE environment requirements
The values in the following table apply specifically to Linux on IBM Z and LinuxONE environments.
Note:
- You must use a separate s390x architecture Linux® LPAR or zKVM guest to build Docker images for your applications.
Requirement | Boot node | Master node | Proxy node | Worker node | Management node |
---|---|---|---|---|---|
Number of hosts | 1 | 1, 3, or 5 | 1 or more | 1 or more | 1 or more |
Cores (IFLs); for more information about IFLs, see Integrated Facility for Linux (IFL) | 1 | 2 | 1 or more | 1 or more | 1 or more |
Number of CPUs | 6 or more | 6 or more | 3 or more | 4 or more | 5 or more |
CPU speed | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz | >= 2.4 GHz |
RAM | >=24 GB | >=24 GB | >=16 GB | >=8 GB | >=32 GB |
Free disk space to install | >=100 GB | >=200 GB | >=150 GB | >=150 GB | >=200 GB |