Hardware requirements and recommendations
Review the minimum CPU, memory, and disk space requirements for setting up and running IBM® Cloud Private clusters.
Note: Verify that you meet the increased memory requirements. For more information, see the Hardware requirements section.
The following tables list the minimum system requirements per node for running IBM Cloud Private. The minimum requirement for IBM Cloud Private is one master (and proxy) node, one management node, and one worker node.
Hardware requirements
Single node requirements
| Requirement | All management services enabled | All management services enabled except logging |
|---|---|---|
| Number of hosts | 1 | 1 |
| Cores | 8 or more | 8 or more |
| CPU | >=2.4 GHz | >=2.4 GHz |
| RAM | 32 GB or more | 16 GB or more |
| Free disk space to install | >=200 GB | >=150 GB |
Note for CPUs:
- For a Linux® x86_64 cluster, use a CPU that supports SSE 4.2.
- For a Linux® on Power® (ppc64le) cluster, use a CPU that is version Power8 or higher.
- For a Linux® on IBM® Z and LinuxONE cluster, use a CPU that is either version EC12 or later or any LinuxONE system.
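On Linux hosts, you can quickly compare a machine against these single-node minimums and check for SSE 4.2 support; a minimal sketch (the 8-core and 32 GB thresholds are taken from the all-services column above):

```shell
# Compare this host against the single-node minimums (8 cores, 32 GB RAM)
# and check for the SSE 4.2 instruction set on x86_64.
cores=$(nproc)
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ram_gb=$((ram_kb / 1024 / 1024))
echo "cores=${cores} ram=${ram_gb}GB"

# SSE 4.2 is advertised as the sse4_2 flag in /proc/cpuinfo on x86_64.
if grep -q -m1 -w sse4_2 /proc/cpuinfo; then
    echo "SSE 4.2: supported"
else
    echo "SSE 4.2: not reported (flag is expected on x86_64 only)"
fi

if [ "$cores" -ge 8 ] && [ "$ram_gb" -ge 32 ]; then
    echo "meets single-node minimums"
else
    echo "below single-node minimums"
fi
```

On Power or IBM Z hosts, the SSE check does not apply; verify the processor generation (Power8, EC12, or later) instead.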
Multi-node requirements
Note: In your multi-node cluster, if you do not use a management node, ensure that the master node meets the requirements of the management node plus the master node.
| Requirement | Boot node | Master node | Proxy node | Worker node | Management node | VA node | etcd node |
|---|---|---|---|---|---|---|---|
| Number of hosts | 1 | 1, 3, or 5 | 1 or more | 1 or more | 1 or more | 1, 3, or 5 | An odd number, 1 or more |
| Cores | 1 or more | 8 or more | 2 or more | 2 or more | 8 or more | | 1 or more |
| CPU | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz | >=2.4 GHz |
| RAM | >=4 GB | >=16 GB | >=4 GB | >=4 GB | >=16 GB | | >=4 GB |
| Free disk space to install | >=100 GB | >=300 GB | >=150 GB | >=150 GB | >=300 GB | | >=100 GB |
Notes:
- For CPUs:
  - For a Linux x86_64 cluster, use a CPU that supports SSE 4.2.
  - For a Linux on Power (ppc64le) cluster, use a CPU that is version Power8 or later.
  - For a Linux on IBM Z and LinuxONE cluster, use a CPU that is version EC12 or later, or any LinuxONE system.
- A virtual processor core (VPC) is a unit of measurement that is used to determine the licensing cost of IBM products. It is based on the number of virtual cores (vCPUs) that are available to the product. A vCPU is a virtual core that is assigned to a virtual machine, or a physical processor core if the server is not partitioned for virtual machines. A vCPU is equivalent to a Kubernetes CPU. For more details, see Kubernetes Meaning of CPU.
- If you disable logging or monitoring during installation, you can save some RAM and CPU. If you want to enable logging or monitoring, see the sample deployment sizes in Sizing your cluster.
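For example, in recent IBM Cloud Private releases the installer's config.yaml lets you toggle individual management services; a hedged sketch (the exact key names can vary by release, so check the configuration reference for your version):

```
# config.yaml fragment (sketch): disable logging and monitoring at install time
management_services:
  logging: disabled
  monitoring: disabled
```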
Disk space requirements
| Location | Minimum disk space | Node |
|---|---|---|
| / | 800 MB | All nodes. Note: The / directory is used to store configuration files. The 800 MB minimum applies only if /var, /opt, and /tmp are separate partitions and the offline packages are not under the / partition. |
| /var | 240 GB | Master and management nodes |
| /var/lib/docker | >=100 GB | All nodes |
| /var/lib/etcd | >=10 GB | Master and etcd nodes. Note: If the etcd node is separated, this directory is on the etcd node. |
| /var/lib/icp | >=100 GB | Master, management, and VA nodes |
| /var/lib/mysql | >=10 GB | Master nodes |
| /var/lib/registry | >=10 GB | Master nodes. Note: The directory must be at least 50 GB if the cluster is a mixed cluster, and must be large enough to host all the Docker images that you plan to store in your private image registry. |
| /var/lib/kubelet | >=10 GB | All nodes. Note: If you enable the Vulnerability Advisor, the VA node needs >=100 GB of disk space. |
| /tmp | 50 GB | All nodes. Staging directory for installation files. |
| Installation directory | 50 GB | Boot node. Note: The directory must have at least 50 GB of available disk space for the installation and installation files. |
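As a quick sanity check before installation, you can report the available space behind each of these locations; a minimal sketch using GNU df (directories that do not exist yet are only flagged, because df reports an error for missing paths):

```shell
# Report available space for the installation-relevant locations that
# already exist on this host; compare against the minimums in the table.
for dir in / /var /var/lib/docker /var/lib/etcd /var/lib/kubelet /tmp; do
    if [ -d "$dir" ]; then
        df -h --output=avail,target "$dir" | tail -1
    else
        echo "not created yet (space comes from its parent filesystem): $dir"
    fi
done
```

The --output option requires GNU coreutils df; on other systems, plain `df -h <dir>` gives the same information in a wider format.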
Important: The /var directory is the default storage location for most Docker images and the containers that are used in your IBM Cloud Private cluster. Other directories that are used by the installer, but that do not require significant amounts of disk space, include the following:
- /etc/cfc - stores the IBM Cloud Private configuration and certificate key files.
- /opt/ibm/cfc - stores the IBM Cloud Private license files.
To prevent disk space issues, mount the default storage directories on separate paths that have larger disk capacities. For more information about mounting the Docker storage directory (/var/lib/docker), see Specifying a default Docker storage directory by using bind mount.
You can also use this bind method to mount the other IBM Cloud Private default storage directories. To prevent disk space issues in your cluster, you might want to use a bind mount to mount the following directories:
- Etcd - /var/lib/etcd
- VA - /var/lib/icp
- Kubelet service - /var/lib/kubelet
Note: For offline installation, the installation directory must have at least 50 GB of available disk space for the installation and installation files.
For more information about mounting the default storage directories, see Specifying other default storage directories by using bind mount.
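The bind-mount approach can be sketched as follows; /data/docker is a hypothetical directory on a larger filesystem, and the snippet prints the commands and fstab entry for review rather than applying them, because mounting requires root:

```shell
# Sketch: bind-mount a directory on a larger filesystem over /var/lib/docker.
SRC=/data/docker        # hypothetical source on a larger disk; adjust as needed
TARGET=/var/lib/docker

echo "run as root:"
echo "  mkdir -p $SRC $TARGET"
echo "  mount --bind $SRC $TARGET"
echo "add to /etc/fstab so the bind mount persists across reboots:"
echo "  $SRC  $TARGET  none  bind  0 0"
```

The same pattern applies to /var/lib/etcd, /var/lib/icp, and /var/lib/kubelet; set up the bind mounts before you run the installer so that Docker and etcd never write to the smaller filesystem.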
PowerVM environment requirements
The values in the following table apply specifically to PowerVM environments. They do not apply to the Kernel-based Virtual Machine (KVM) environments or the bare-metal environments.
| Requirement | Boot node | Master node | Proxy node | Worker node | Management node | VA node | etcd node |
|---|---|---|---|---|---|---|---|
| Number of hosts | 1 | 1, 3, or 5 | 1 or more | 1 or more | 1 or more | 1, 3, or 5 | An odd number, 1 or more |
| vCPUs | 1 or more | 2 or more | 1 or more | 1 or more | 2 or more | | 1 or more |
| Processor units | 0.5 or more | | | 1 or more | | | 1 or more |
Recommendations for PowerVM environments:
- At least four virtual machines (VMs, also called LPARs) are recommended, with the master node, management node, Vulnerability Advisor node, and worker nodes on separate virtual machines. In high-scale environments, the etcd node must also be on a separate VM.
- Use shared, uncapped processors to allow overcommitting CPUs as needed. If dedicated processors are used, follow the guidelines for cores (vCPUs). For shared processor pool recommendations, see Shared processors.
- In a large-scale environment, you likely need more vCPUs and processor units. However, maintain the vCPU-to-processor-unit ratio that is listed in the previous table. For example, using eight vCPUs for your management node requires four processor units.
- See Setting the node roles in the hosts file for more information about configuring etcd nodes.
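The ratio arithmetic above can be sketched as a tiny helper; the 2:1 vCPU-to-processor-unit ratio is taken from the management-node example:

```shell
# Compute processor units needed for a given vCPU count at a 2:1
# vCPU-to-processor-unit ratio (as in the management-node example above).
vcpus=8
ratio=2
units=$((vcpus / ratio))
echo "${vcpus} vCPUs -> ${units} processor units"
# prints: 8 vCPUs -> 4 processor units
```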