Installation prerequisites for Db2 pureScale Feature (Intel Linux)
This document applies only to Linux® distributions running on Intel-based hardware. Before you install IBM® Db2 pureScale Feature, you must ensure that your system meets the installation prerequisites.
For the specific Linux distribution versions that are supported, see the web page listed in the reference.
Ensure that you have created your Db2 pureScale Feature installation plan. The plan helps ensure that your system meets the prerequisites and that you have performed the pre-installation tasks. The following requirements are described in detail: software prerequisites (including the operating system, IBM Spectrum Scale, and Tivoli® SA MP), storage hardware requirements, network prerequisites, and hardware and firmware prerequisites.
Software prerequisites
In Db2 11.5, the Db2 pureScale Feature supports Linux virtual machines.
Note: The net-tools-deprecated package is required for pureScale configurations running on SLES 15 SP3 or later.

Linux distribution | Required packages | OpenFabrics Enterprise Distribution (OFED) package
---|---|---
Red Hat® Enterprise Linux (RHEL) 9.2 | libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, Python 3.6+, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh [9], psmisc [9], libibverbs, libibverbs-utils, librdmacm, librdmacm-utils, rdma-core, ibacm, infiniband-diags, iwpmd, libibumad, libpsm2, libpsm2-compat, mstflint, opa-address-resolution, opa-basic-tools, opa-fastfabric, opa-libopamgt, perftest, qperf, srp_daemon | For RoCE network type, run a group installation of the "InfiniBand Support" package.
Red Hat Enterprise Linux (RHEL) 8.8 | libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, Python 3.6+, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh [9], psmisc [9], libibverbs, libibverbs-utils, librdmacm, librdmacm-utils, rdma-core, ibacm, infiniband-diags, iwpmd, libibumad, libpsm2, libpsm2-compat, mstflint, opa-address-resolution, opa-basic-tools, opa-fastfabric, opa-libopamgt, perftest, qperf, srp_daemon | For RoCE network type, run a group installation of the "InfiniBand Support" package.
Red Hat Enterprise Linux (RHEL) 8.6 [10] | libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, Python 3.6+, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh [9], psmisc [9], libibverbs, libibverbs-utils, librdmacm, librdmacm-utils, rdma-core, ibacm, infiniband-diags, iwpmd, libibumad, libpsm2, libpsm2-compat, mstflint, opa-address-resolution, opa-basic-tools, opa-fastfabric, opa-libopamgt, perftest, qperf, srp_daemon | For RoCE network type, run a group installation of the "InfiniBand Support" package.
Red Hat Enterprise Linux (RHEL) 8.1 [7] | libibverbs, librdmacm, rdma-core, dapl, ibacm, ibutils, libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, Python 3.6+, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh [9], psmisc [9] | No OFED package required. Only TCP is supported.
Red Hat Enterprise Linux (RHEL) 7.9 [6] | libibverbs, Python 3.6+, librdmacm, rdma-core, dapl, ibacm, ibutils, libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh [9], psmisc [9] | For InfiniBand network type or RoCE network type, run a group installation of the "InfiniBand Support" package.
Red Hat Enterprise Linux (RHEL) 7.8 [5] | libibverbs, Python 3.6+, librdmacm, rdma-core, dapl, ibacm, ibutils, libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, mksh [9], psmisc [9] | For InfiniBand network type or RoCE network type, run a group installation of the "InfiniBand Support" package.
Red Hat Enterprise Linux (RHEL) 7.6 [1] | libibverbs, librdmacm, rdma-core, dapl, ibacm, ibutils, libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, Python 3.6+ | For InfiniBand network type or RoCE network type, run a group installation of the "InfiniBand Support" package.
Red Hat Enterprise Linux (RHEL) 7.5 [2] | libibcm, libibverbs, librdmacm, rdma-core, dapl, ibacm, ibutils, libstdc++ (both x86_64 and i686), glibc (both x86_64 and i686), gcc-c++, gcc, kernel, kernel-devel, kernel-headers, linux-firmware, ntp or chrony, ntpdate, sg3_utils, sg3_utils-libs, binutils, binutils-devel, m4, openssh, cpp, ksh, libgcc (both x86_64 and i686), file, libgomp, make, patch, perl-Sys-Syslog, Python 3.6+ | For InfiniBand network type or RoCE network type, run a group installation of the "InfiniBand Support" package.
SUSE Linux Enterprise Server (SLES) 15 SP3 [11], [12] | libibverbs, rdma-core, dapl, ibacm, ibsim, ibutils, libstdc++*, glibc*, gcc-c++, gcc, kernel-default, kernel-devel, kernel-firmware, kernel-source, ntp or chrony, net-tools-deprecated, sg3_utils, binutils, openssh, cpp, ksh-93u, mksh, libgcc, file, libgomp1, make, patch, libdat2-2, dapl-utils, infiniband-diags, m4, psmisc, Python 3.6+ |
SUSE Linux Enterprise Server (SLES) 12 SP5 [4], [8] | libibcm, libibverbs, rdma-core, dapl, ibacm, ibsim, ibutils, libipathverbs, libstdc++*, glibc*, gcc-c++, gcc, kernel-default, kernel-devel, kernel-firmware, ntp or chrony, sg3_utils, binutils, openssh, cpp, ksh-93u, mksh-50f, libgcc, file, libgomp1, make, patch, libdat2-2, dapl-utils, infiniband-diags, m4, mksh [9], psmisc [9], Python 3.6+ | The OFED package is already bundled within the RDMA package in SLES 12 Service Packs.
SUSE Linux Enterprise Server (SLES) 12 SP4 [8] | libibcm, libibverbs, rdma-core, dapl, ibacm, ibsim, ibutils, libipathverbs, libstdc++*, glibc*, gcc-c++, gcc, kernel-default, kernel-devel, kernel-firmware, ntp or chrony, sg3_utils, binutils, openssh, cpp, ksh-93u, mksh-50f, libgcc, file, libgomp1, make, patch, libdat2-2, dapl-utils, infiniband-diags, m4, Python 3.6+ | The OFED package is already bundled within the RDMA package in SLES 12 Service Packs.
SUSE Linux Enterprise Server (SLES) 12 SP3 [3] | libibcm, libibverbs, rdma-core, dapl, ibacm, ibsim, ibutils, libipathverbs, libstdc++*, glibc*, gcc-c++, gcc, kernel-default, kernel-devel, kernel-firmware, ntp or chrony, sg3_utils, binutils, openssh, cpp, ksh-93u, mksh-50f, libgcc, file, libgomp1, make, patch, libdat2-2, dapl-utils, infiniband-diags, m4, Python 3.6+ | The OFED package is already bundled within the RDMA package in SLES 12 Service Packs.
SUSE Linux Enterprise Server (SLES) 15 SP4 | libibcm, libibverbs, rdma-core, dapl, ibacm, ibsim, ibutils, libipathverbs, libstdc++*, glibc*, gcc-c++, gcc, kernel-default, kernel-devel, kernel-firmware, ntp or chrony, sg3_utils, binutils, openssh, cpp, ksh-93u, mksh-50f, libgcc, file, libgomp1, make, patch, libdat2-2, dapl-utils, infiniband-diags, m4, Python 3.6+ |
SUSE Linux Enterprise Server (SLES) 15 SP5 | libibcm, libibverbs, rdma-core, dapl, ibacm, ibsim, ibutils, libipathverbs, libstdc++*, glibc*, gcc-c++, gcc, kernel-default, kernel-devel, kernel-firmware, ntp or chrony, sg3_utils, binutils, openssh, cpp, ksh-93u, mksh-50f, libgcc, file, libgomp1, make, patch, libdat2-2, dapl-utils, infiniband-diags, m4, Python 3.6+ |
[1] Db2 APAR IT29745 is required when running RHEL 7.6 or higher.
[2] On RHEL 7.5 or higher, if a ConnectX-3 card is to be used, firmware 2.42.5000 or higher must be installed on the card.
[3] On SLES 12 SP3 or higher, if a ConnectX-3 card is to be used, firmware 2.40.7004 or higher must be installed on the card.
[4] When using SLES 12 SP5, the Db2 version must be 11.5.5 or later.
[6] When using RHEL 7.9, the Db2 version must be 11.5.6 or later.
[7] When using RHEL 8.1, the Db2 version must be 11.5.5 or later. RHEL 8.1 currently supports TCP only (no RDMA).
[8] When using a Mellanox ConnectX-4 card for RoCE on SLES, you must use SLES 12 SP4 or higher, and Db2 11.5.5 with APAR IT31924, or later.
[9] This package is required starting with Db2 11.5.7.
[10] When using RHEL 8.6, the Db2 version must be 11.5.8 or later.
[11] When using SLES 15 SP3 or higher, the Db2 version must be 11.5.8 or later.
[12] When using a Mellanox ConnectX-5 card for RoCE on SLES, you must use SLES 15 SP3 or higher.
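The db2prereqcheck utility shipped with the installation image validates many of these software prerequisites automatically. As a quick manual spot check, the following Python sketch (Python 3.6+ is itself a listed prerequisite) queries rpm for an illustrative subset of the package names above; the subset is ours, taken as an example from the RHEL 8.x row, and is not the complete list for any distribution.

```python
#!/usr/bin/env python3
"""Spot-check that required packages are installed (illustrative subset)."""

import subprocess
import sys

# Illustrative subset of the RHEL 8.x package list above; not exhaustive.
REQUIRED_PACKAGES = [
    "libstdc++", "glibc", "gcc", "gcc-c++", "kernel-devel",
    "sg3_utils", "openssh", "ksh", "make", "patch",
    "libibverbs", "librdmacm", "rdma-core",
]


def is_installed(package: str) -> bool:
    """Return True if rpm reports the package as installed."""
    result = subprocess.run(
        ["rpm", "-q", package],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


missing = [p for p in REQUIRED_PACKAGES if not is_installed(p)]
if missing:
    print("Missing packages:", ", ".join(missing))
    sys.exit(1)
print("All checked packages are installed.")
```

On RHEL, missing packages can then be added with yum or dnf; for RoCE and InfiniBand network types, the table above also calls for a group installation of the "InfiniBand Support" package.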
Storage hardware requirements
 | Recommended free disk space | Minimum required free disk space
---|---|---
Disk to extract installation | 3 GB | 3 GB |
Installation path | 6 GB | 6 GB |
/tmp directory | 5 GB | 2 GB |
/var directory | 5 GB | 2 GB |
/usr directory | 2 GB | 512 MB |
Instance home directory | 5 GB | 1.5 GB [1] |
root home directory | 300 MB | 200 MB |
In addition to the disk space in the preceding table, allow for the following:
- Instance shared directory: 10 GB [1]
- Data: dependent on your specific application needs
- Logs: dependent on the expected number of transactions and the applications' logging requirements
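As an illustration of how the minimum figures from the table might be verified before installation, the following Python sketch compares them against the free space the filesystem reports. The instance home path is a placeholder assumption; adjust it for your environment.

```python
#!/usr/bin/env python3
"""Compare minimum free-space figures from the table with actual free space."""

import os
import shutil

GIB = 1024 ** 3
MIB = 1024 ** 2

# (path, minimum required free bytes) from the table; the instance
# home path below is hypothetical -- substitute your own.
MINIMUMS = [
    ("/tmp", 2 * GIB),
    ("/var", 2 * GIB),
    ("/usr", 512 * MIB),
    ("/home/db2inst1", int(1.5 * GIB)),  # placeholder instance home
]

for path, required in MINIMUMS:
    if not os.path.isdir(path):
        print(f"{path}: not found, skipping")
        continue
    free = shutil.disk_usage(path).free
    status = "OK" if free >= required else "TOO SMALL"
    print(f"{path}: {free / GIB:.1f} GiB free "
          f"(minimum {required / GIB:.1f} GiB) {status}")
```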
Network prerequisites
On a TCP/IP protocol over Ethernet (TCP/IP) network, a Db2 pureScale environment requires only one high-speed network for the Db2 cluster interconnect. Running your Db2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.
InfiniBand (IB) networks and RoCE networks that use the RDMA protocol require two networks: one (public) Ethernet network and one (private) high-speed communication network for communication between members and CFs. The high-speed communication network must be an IB network, a RoCE network, or a TCP/IP network; a mixture of these high-speed communication networks is not supported.
You must also keep the maximum transmission unit (MTU) size of the network interfaces at the default value of 1500. For more information about configuring the MTU size on Linux, see How do you change the MTU value on the Linux and Windows operating systems?
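To illustrate this check, the following Python sketch reads each interface's MTU from sysfs and flags any interface that is not at the required default of 1500. Reading /sys/class/net is standard on Linux; which interfaces actually matter depends on which ones carry your cluster interconnect.

```python
#!/usr/bin/env python3
"""Flag network interfaces whose MTU is not at the required default of 1500."""

import os

SYS_NET = "/sys/class/net"
EXPECTED_MTU = 1500

for iface in sorted(os.listdir(SYS_NET)):
    if iface == "lo":  # loopback normally uses a much larger MTU; skip it
        continue
    with open(os.path.join(SYS_NET, iface, "mtu")) as f:
        mtu = int(f.read().strip())
    flag = "" if mtu == EXPECTED_MTU else "  <-- not the default 1500"
    print(f"{iface}: MTU {mtu}{flag}")
```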
The rest of this network prerequisites section applies only to configurations that use the RDMA protocol.
Communication adapter type | Switch | IBM Validated Switch | Cabling
---|---|---|---
InfiniBand (IB) | QDR IB | Mellanox part number MIS5030Q-1SFC; Mellanox 6036SX (IBM part number: 0724016 or 0724022) | QSFP cables
10-Gigabit Ethernet (10GE) | 10GE | | Small Form-factor Pluggable Plus (SFP+) cables
40-Gigabit Ethernet (40GE) | 40GE | | QSFP cables
100-Gigabit Ethernet (100GE) | 100GE | Cisco Nexus C9336C-FX2 | QSFP28 cables
- Db2 pureScale environments with Linux systems and InfiniBand communication adapters require FabricIT EFM switch-based fabric management software. For communication adapter port support on CF servers, the minimum required fabric manager software image that must be installed on the switch is image-PPC_M405EX-EFM_1.1.2500.img. The switch might not support a direct upgrade path to the minimum version, in which case multiple upgrades are required. For instructions on upgrading the fabric manager software on a specific Mellanox switch, see the Mellanox website. Enabling the subnet manager (SM) on the switch is mandatory for InfiniBand networks. To create a Db2 pureScale environment with multiple switches, you must have multiple communication adapter ports on the CF servers and configure switch failover on the switches. To support switch failover, see the Mellanox website for instructions on setting up the subnet manager for a high availability domain.
- Cable considerations:
- On InfiniBand networks: QSFP 4 x 4 QDR cables are used to connect hosts to the switch and for inter-switch links. If you are using two switches, two or more inter-switch links are required. The maximum number of inter-switch links (ISLs) required can be determined by taking half of the total communication adapter ports that are connected from the CFs and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2). A sketch after this list works through the arithmetic.
- On a RoCE network, the maximum number of ISLs can be further limited by the number of ports that are supported by the Link Aggregation Control Protocol (LACP). This setup is required for switch failover. Because this value can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch, with Blade OS 6.3.2.0, is limited to a maximum of eight ports in each LACP trunk between the two switches, which effectively caps the maximum number of ISLs at four (four ports on each switch).
- For the configuration and the features that must be enabled or disabled for switch support on RoCE networks, see Configuring switch failover for a Db2 pureScale environment on a RoCE network (Linux). IEEE 802.3x global pause flow control is required. Any Ethernet switch that supports the listed configuration and features is supported. The exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
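The ISL sizing rule and the LACP cap described in the list above can be expressed as a small calculation. The following Python sketch is illustrative only; the function name and parameters are ours, not part of any Db2 tooling.

```python
#!/usr/bin/env python3
"""Worked version of the ISL sizing rule: half of all connected
communication adapter ports, optionally capped by the switch's LACP
port limit per trunk."""

from typing import Optional


def max_inter_switch_links(cf_ports: int, member_ports: int,
                           lacp_port_limit: Optional[int] = None) -> int:
    """Half of all connected adapter ports, capped by LACP if given."""
    isls = (cf_ports + member_ports) // 2
    if lacp_port_limit is not None:
        # e.g. 8 LACP ports per trunk => 4 ISLs (4 ports on each switch)
        isls = min(isls, lacp_port_limit // 2)
    return isls


# The example from the text: two CFs with four ports each and four
# members with one port each => (2 * 4 + 4) / 2 = 6 ISLs.
print(max_inter_switch_links(cf_ports=2 * 4, member_ports=4))  # 6

# With the G8124's 8-port LACP trunk limit, the cap is 4 ISLs.
print(max_inter_switch_links(cf_ports=2 * 4, member_ports=4,
                             lacp_port_limit=8))  # 4
```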
Communication adapter type | Switch | Cabling
---|---|---
InfiniBand (IB) | Voltaire 40 Gb InfiniBand Switch [1], for example part number 46M6005 | QSFP cables [2]
10-Gigabit Ethernet (10GE) | BNT Virtual Fabric 10 Gb Switch Module for IBM BladeCenter, for example part number 46C7191 |
[1] To create a Db2 pureScale environment with multiple switches, set up communication adapters for the CF hosts.
[2] On InfiniBand networks: QSFP 4 x 4 QDR cables are used to connect hosts to the switch and for inter-switch links. If you are using two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined by taking half of the total communication adapter ports that are connected from the CFs and members to the switches. For example, in a two-switch Db2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4) / 2). On a 10GE network, the maximum number of ISLs can be further limited by the number of ports that are supported by the Link Aggregation Control Protocol (LACP). This setup is required for switch failover. Because this value can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch, with Blade OS 6.3.2.0, is limited to a maximum of eight ports in each LACP trunk between the two switches, which effectively caps the maximum number of ISLs at four (four ports on each switch).
Hardware and firmware prerequisites
For TCP/IP networks, the Db2 pureScale Feature is supported on any rack-mounted server or blade server. For networks that use the RDMA protocol, one of the following adapter generations is required:
- Mellanox ConnectX-2 generation card that supports RDMA over converged Ethernet (RoCE) or InfiniBand
- Mellanox ConnectX-3 generation card that supports RDMA over converged Ethernet (RoCE) or InfiniBand
- Mellanox ConnectX-4 generation card that supports RDMA over converged Ethernet (RoCE) or InfiniBand
- Mellanox ConnectX-5 generation card that supports RDMA over converged Ethernet (RoCE) (RHEL 7.7 and later, and SLES 15 SP3 and later)
- Mellanox ConnectX-6 generation card that supports RDMA over converged Ethernet (RoCE) (RHEL only)
Examples of specific supported adapters include:
- Mellanox ConnectX-2 Dual Port 10 GbE Adapter for Lenovo x-Series (81Y9990)
- Mellanox ConnectX-2 Dual-port QSFP QDR IB Adapter for Lenovo x-Series (95Y3750)
- Mellanox ConnectX-3 FDR VPI IB/E Adapter for Lenovo x-Series (00D9550)
- Mellanox ConnectX-3 10 GbE Adapter for Lenovo x-Series (00D9690)
- Mellanox ConnectX-4 40GbE Adapter for Lenovo x-Series (00YK367)
- Mellanox ConnectX-6 Dx 100GbE QSFP56 2-port PCIe 4 Ethernet Adapter (01PE649)
- Mellanox ConnectX-5 100Gb Adapter (MCX556A-ECAT)
- Mellanox ConnectX-5 100Gb Adapter (MCX516A-CCAT)
Server | 10-Gigabit Ethernet (10GE) adapter | Minimum 10GE network adapter firmware version | InfiniBand (IB) Host Channel Adapter (HCA) | Minimum IB HCA firmware version |
---|---|---|---|---|
BladeCenter HS22 System x blades | Mellanox 2-port 10 Gb Ethernet Expansion Card with RoCE, for example part number 90Y3570 | 2.9.1000 | 2-port 40 Gb InfiniBand Card (CFFh), for example part number 46M6001 | 2.9.1000 |
BladeCenter HS23 System x blades | Mellanox 2-port 10 Gb Ethernet Expansion Card (CFFh) with RoCE, part number 90Y3570 | 2.9.1000 | 2-port 40 Gb InfiniBand Expansion Card (CFFh) - part number 46M6001 | 2.9.1000 |
KVM Virtual Machine | Mellanox ConnectX-2 EN 10 Gb Ethernet Adapters with RoCE | 2.9.1200 | Not supported | N/A |
Lenovo Flex System x240 Compute Node, Lenovo Flex System x440 Compute Node | IBM Flex System® EN4132 2-port 10 Gb RoCE Adapter | 2.10.2324 + uEFI Fix 4.0.320 | Not supported | N/A |
- Install the latest supported firmware for your System x server from http://www.ibm.com/support/us/en/.
- KVM-hosted environments for a Db2 pureScale Feature are supported on rack-mounted servers only.
- Availability of specific hardware or firmware can vary over time and region. Check availability with your supplier.