Hardware and software requirements for the z/VM® system

The following information provides a consolidated view of the hardware and software requirements for your IBM® Cloud Infrastructure Center environment for the z/VM system.

z/VM system

To manage IBM Z® or LinuxONE resources that are virtualized by z/VM, the IBM® Cloud Infrastructure Center has the following requirements:

  • z/VM version 7.2 or 7.3. See the z/VM Installation Guide.

    Note: The information in this document assumes that DIRMAINT and RACF® are used. If you use tools other than DIRMAINT and RACF, contact your vendor for the corresponding required configurations.

  • It is recommended to update z/VM to the latest level, with the APARs installed that are listed on the z/VM service information page.

  • You must install all the z/VM SMAPI and z/VM DIRMAINT APARs for the corresponding z/VM version. You can verify the installed service with the commands shown after this list.

  • For DIRMAINT performance optimization, refer to DIRMAINT Performance.

  • If you are running on z/VM 7.2, make sure that z/VM APAR VM66173 is installed.

  • If you are running on z/VM 7.2 with mixed CP and IFL processors configured on the LPAR, make sure that z/VM APAR VM66568 is installed.
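
A minimal sketch of verifying the service level and APAR status, assuming VMSES/E is used for service and the commands are issued from a MAINT-class user ID (the component names cp and dirm and the APAR number follow the examples above):

    query cplevel
    service cp status vm66173
    service dirm status

query cplevel reports the running z/VM version and service level; service cp status vm66173 reports whether that CP APAR is applied, and service dirm status summarizes the service status of the DIRMAINT component.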

Management node

Supported operating systems:

  • RHEL 8.6

  • RHEL 8.8

Dependent management node RPMs

Minimum server requirements:

  • Memory: 16 GB

  • IFL: ~ 0.5

  • Disk: 40 GB

Note: 40 GB of disk space is the minimum requirement for the management node. Image files and backup data files are stored on the management server, so if you use multiple images and plan backup operations, plan extra disk space for the image and backup locations. Add up the sizes of all the usages you expect in the future and make sure that enough space is left for them. Provide sufficient disk space when planning the management node.
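
For example, a quick way to check the space that is currently used and still available, assuming the default image location /var/lib/glance/images that is described below:

    # free space on the file system that holds the image folder
    df -h /var/lib/glance/images
    # current size of the image folder itself
    du -sh /var/lib/glance/images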

Note: Around 0.5 IFL is consumed to run an IBM Cloud Infrastructure Center management node for z/VM.

Note: For the standalone IBM Cloud Infrastructure Center, it is recommended to use external storage, such as NFS or other shared storage, mounted to the following folder so that the disk size can be increased more easily when needed (a mount sketch follows the folder description):

  • /var/lib/glance/images: The default folder that contains imported or created images. It can become very large if there are multiple images in the IBM Cloud Infrastructure Center. If a shared remote mounted directory is used, set the owner of /var/lib/glance/images to glance:glance.
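
A minimal sketch of backing the folder with shared storage; the NFS server name nfs-server and the export path /export/images are placeholders, not part of the product:

    # mount a (hypothetical) NFS export over the default image folder
    mount -t nfs nfs-server:/export/images /var/lib/glance/images
    # because a shared remote mounted directory is used, set the owner to glance:glance
    chown glance:glance /var/lib/glance/images
    # optionally make the mount persistent across reboots
    echo 'nfs-server:/export/images /var/lib/glance/images nfs defaults 0 0' >> /etc/fstab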

Note: For the IBM Cloud Infrastructure Center multi-node cluster, when Swift is used, the Glance images, exported images, and volume backups are stored in Swift Object Storage. By default, a 20 GB sparse file is created and mounted to the Swift storage path /srv/node/partition1, and it uses the available space of the file system. Therefore, LVM is recommended as the Swift storage device. You can refer to the guide to configure LVM as the Swift storage device.
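
A minimal sketch of preparing LVM as the Swift storage device; the DASD partition /dev/dasdb1, the volume group and logical volume names, the 100G size, and the XFS file system are assumptions for illustration, so follow the referenced guide for the exact steps:

    pvcreate /dev/dasdb1                     # initialize a physical volume (device name is hypothetical)
    vgcreate swift-vg /dev/dasdb1            # create a volume group for Swift storage
    lvcreate -L 100G -n swift-lv swift-vg    # size the logical volume for your images and backups
    mkfs.xfs /dev/swift-vg/swift-lv          # create a file system (XFS is a common choice for Swift)
    mount /dev/swift-vg/swift-lv /srv/node/partition1   # mount at the default Swift storage path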

Important: If the root disk on the management node runs out of space, IBM Cloud Infrastructure Center services may behave unexpectedly or become unavailable, and database inconsistency or data loss may result. Also, the management node's hostname cannot be changed after the IBM Cloud Infrastructure Center installation is complete; otherwise, services may become unavailable or operations may time out.

Compute node

Supported operating systems:

  • RHEL 8.6

  • RHEL 8.8

Dependent compute node RPMs

Minimum server requirements:

  • Memory: 8 GB

  • IFL: ~ 0.2

  • Disk: 80 GB

Note: 80 GB of disk space is the minimum requirement for the compute node. The IBM Cloud Infrastructure Center caches image files on each compute node. If you need to use multiple images at the same time, plan extra disk space for the cached image files. Add up the sizes of all the usages you expect in the future and make sure that enough space is left for them. Provide sufficient disk space when planning the compute node.

Note: Around 0.2 IFL is consumed to run an IBM Cloud Infrastructure Center compute node for z/VM as the minimum requirement. The consumption might increase when certain workloads run in a certain amount of time (for example, multiple virtual machine deployments in parallel).

Note: If you plan to turn on monitoring data collection, consider the extra disk space for storing the monitoring data. Refer to Planning for monitoring data storage for further information.

Note: It is recommended to use external storage, such as NFS or other shared storage, mounted to the following folders so that the disk size can be increased more easily when needed (a mount sketch follows the folder descriptions).

  • /var/lib/zvmsdk: This folder contains the image cache that is downloaded from the management node and imported into the compute node. It can become very large if there are multiple images on the compute node.

  • /var/opt/ibm/icic/image-backups: For the standalone IBM Cloud Infrastructure Center, if the compute node is the agent node of the storage provider, this is the target folder for exporting the volume-backed or snapshot-backed images that are used to boot virtual machines from volumes.

  • /var/lib/cinder: If the compute node is the agent node of the storage provider, this folder stores image files when virtual machines are booted from volumes. To support booting multiple virtual machines from volumes concurrently, /var/lib/cinder/conversion/ is used to store the downloaded image before it is copied to the volume. For more details, refer to the related troubleshooting. If shared remote mounted directories are used, set the owner of /var/opt/ibm/icic/image-backups and /var/lib/cinder to cinder:cinder.
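
A minimal sketch of backing these folders with shared storage, mirroring the management node example; the server name nfs-server and the export paths are placeholders:

    # mount (hypothetical) NFS exports over the compute node folders
    mount -t nfs nfs-server:/export/zvmsdk /var/lib/zvmsdk
    mount -t nfs nfs-server:/export/image-backups /var/opt/ibm/icic/image-backups
    mount -t nfs nfs-server:/export/cinder /var/lib/cinder
    # because shared remote mounted directories are used, set the required owners
    chown cinder:cinder /var/opt/ibm/icic/image-backups /var/lib/cinder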

Note: If the root disk on the compute node runs out of space, IBM Cloud Infrastructure Center services may behave unexpectedly or become unavailable, and database inconsistency or data loss may result.

Note: Do not remove the root user from the sudo configuration (for example, from /etc/sudoers) on your compute nodes and management node, because sudo is used in many IBM Cloud Infrastructure Center scripts.
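
For example, you can confirm that root still has sudo privileges; the exact /etc/sudoers contents vary by distribution, and the entry shown is the typical RHEL default:

    # list the sudo privileges of root; the output should not say root may not run sudo
    sudo -l -U root
    # the typical RHEL default /etc/sudoers entry for root looks like:
    #   root    ALL=(ALL)       ALL
    grep '^root' /etc/sudoers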

z/VM dasd group

A dasd group (of either ECKD or FBA type) is needed if you want to use the z/VM® dasd group as the root or data disks for the virtual machines. If you use persistent storage as the root or data disks for the virtual machines, the dasd group is not required. Currently, a maximum of one dasd group is allowed for each z/VM compute node. An illustrative definition sketch follows the note below.

Note: For RHCOS4 DASD images, only the ECKD dasd group is supported.
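
For illustration only, a dasd group is typically defined in the DIRMAINT EXTENT CONTROL file; the region name, group name, volume serial, and device type below are hypothetical values, not required names:

    :REGIONS.
    *RegionId  VolSer  RegStart  RegEnd  Dev-Type
    REGION1    ICIC01  1         END     3390-09
    :END.
    :GROUPS.
    *GroupName  RegionList
    ICICPOOL   REGION1
    :END.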

z/VM VSWITCH

At least one Layer 2 VSWITCH is required by the IBM Cloud Infrastructure Center.
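
As a sketch, a Layer 2 VSWITCH is defined with the ETHERNET option, either in the SYSTEM CONFIG file or dynamically with the equivalent CP command; the VSWITCH name ICICVSW, the OSA device addresses 1000 and 2000, and the user ID ICICUSER are placeholders:

    /* SYSTEM CONFIG statement for a Layer 2 (ETHERNET) VSWITCH backed by OSA devices */
    DEFINE VSWITCH ICICVSW ETHERNET RDEV 1000 2000
    /* authorize a (hypothetical) user ID to couple to the VSWITCH */
    MODIFY VSWITCH ICICVSW GRANT ICICUSER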