Installing the Pacemaker cluster software stack

Pacemaker is an open-source, high-availability cluster manager. To ensure a proper installation, follow the procedures in this topic.

Important: The information in this topic, including the pre-setup checklist and procedure, applies only to version 11.5.5 and earlier. Starting with version 11.5.6, you can install Pacemaker by using the Db2® installer. For more information, see Installing Pacemaker using the Db2 installer.
Important: In Db2 11.5.8 and later, Mutual Failover high availability is supported when using Pacemaker as the integrated cluster manager. In Db2 11.5.6 and later, the Pacemaker cluster manager for automated failover to HADR standby databases is packaged and installed with Db2. In Db2 11.5.5, Pacemaker is included and available for production environments. In Db2 11.5.4, Pacemaker is included as a technology preview only, for development, test, and proof-of-concept environments.

Before you begin

Ensure that you have the Pacemaker cluster software package that is intended for use with Db2; download the package from the IBM Marketing Registration Services site. Before proceeding to the next section, verify that all prerequisites have been met. For more information, see Prerequisites for an integrated solution using Pacemaker.

Pre-setup checklist

  • Instance user ID and group ID are set up.

  • The /etc/hosts file includes both hosts, following the format listed in Host file setup.

  • Both hosts have TCP/IP connectivity between their Ethernet network interfaces.

  • Both the root and instance user IDs can use ssh between the two hosts, using both long and short host names.

  • The Pacemaker cluster software has been downloaded to both hosts.

  • Ensure that no non-Db2 provided Pacemaker components are installed.

  • Ensure that system repositories are enabled so that Pacemaker package dependencies can be resolved.
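For example, on RHEL you can confirm which repositories are enabled with the standard dnf repolist command (zypper repos on SLES). This is a quick sanity check, not part of the formal procedure:
    dnf repolist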
The following example, taken from an AWS environment, shows how each item in the pre-setup checklist is verified:
  1. Hosts information
    Table 1. Example hosts information
    Hostname                                             IP address of the eth0 device
    Short: ip-172-31-15-79                               172.31.15.79
    Long: ip-172-31-15-79.us-east-2.compute.internal
    Short: ip-172-31-10-145                              172.31.10.145
    Long: ip-172-31-10-145.us-east-2.compute.internal
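    The IP address that is bound to the eth0 device can be confirmed on each host with the standard iproute2 utility, for example:
    ip addr show eth0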
  2. /etc/hosts is set up identically on both hosts:
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    
    172.31.15.79 ip-172-31-15-79.us-east-2.compute.internal ip-172-31-15-79
    172.31.10.145 ip-172-31-10-145.us-east-2.compute.internal ip-172-31-10-145
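    To confirm that both the short and long host names resolve as expected, a quick check using the standard getent utility:
    getent hosts ip-172-31-15-79
    getent hosts ip-172-31-10-145.us-east-2.compute.internal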
  3. TCP/IP ping can be performed between the two hosts:
    • On ip-172-31-15-79, ping -I 172.31.15.79 172.31.10.145 works.
    • On ip-172-31-10-145, ping -I 172.31.10.145 172.31.15.79 works.
  4. Instance user ID / group ID: db2inst1 / db2iadm1
    The following commands can be used to create them:
    groupadd db2iadm1
    useradd -g db2iadm1 -m -d /home/db2inst1 db2inst1
    Verify that they have been created by checking the group and user entries in /etc/group and /etc/passwd:
    [root@ip-172-31-15-79 server]# grep db2iadm1 /etc/group
    db2iadm1:x:1001:
    
    [root@ip-172-31-15-79 ec2-user]# grep db2inst1 /etc/passwd
    db2inst1:x:1001:1001::/home/db2inst1:/bin/bash
  5. SSH access works with both long and short host names between the two hosts using the root and instance user ID.
    As root user on ip-172-31-15-79, the following commands can run successfully:
    ssh ip-172-31-15-79 -l root ls
    ssh ip-172-31-15-79.us-east-2.compute.internal -l root ls
    ssh ip-172-31-10-145 -l root ls
    ssh ip-172-31-10-145.us-east-2.compute.internal -l root ls
    As root user on ip-172-31-10-145, the following commands can run successfully:
    ssh ip-172-31-15-79 -l root ls
    ssh ip-172-31-15-79.us-east-2.compute.internal -l root ls
    ssh ip-172-31-10-145 -l root ls
    ssh ip-172-31-10-145.us-east-2.compute.internal -l root ls
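    If passwordless SSH is not yet configured, a minimal sketch using standard OpenSSH tools (run for both the root and instance user IDs, on each host, substituting the other host's name):
    ssh-keygen -t rsa        # accept the defaults to create a key pair
    ssh-copy-id root@ip-172-31-10-145
    Afterward, connect once with each short and long host name so that the host keys are accepted.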
  6. Ensure that no non-Db2 provided Pacemaker components are installed by running the following:
    # rpm -q corosync
    package corosync is not installed
    
    # rpm -q pacemaker
    package pacemaker is not installed
    
    # rpm -q crmsh
    package crmsh is not installed
    
    # rpm -q cluster-glue
    package cluster-glue is not installed
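    The same four queries can also be run in a single loop, for example:
    for pkg in corosync pacemaker crmsh cluster-glue; do rpm -q $pkg; done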
  7. Location of the Db2 software: directory /root/db2Image exists on both hosts.
  8. The Pacemaker cluster software package tar file exists in /tmp on both hosts.
    Each tarball follows the naming convention:
    • Db2 version
    • Pacemaker
    • Date in YYYYMMDD format
    • Linux Distribution
    • Linux architecture
    For example: Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64.tar.gz
    [root@ip-172-31-15-79 tmp]# ls -al *.gz
    -rw-r--r--. 1 ec2-user ec2-user 10710023 Dec 24 21:32 Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64.tar.gz
    
    [root@ip-172-31-10-145 tmp]# ls -al *.gz
    -rw-r--r--. 1 ec2-user ec2-user 10710023 Dec 24 21:32 Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64.tar.gz
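    To confirm that the package is intact and identical on both hosts, you can compare checksums with a standard utility such as sha256sum, run on each host:
    sha256sum /tmp/Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64.tar.gz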

About this task

Follow the procedure to install the Pacemaker cluster software stack.

Procedure

  1. As root on the first host, ip-172-31-15-79, extract the tar file in the /tmp folder.
    • cd /tmp

    • tar -zxf Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64.tar.gz

    • This creates the directory Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64, which contains the following directory tree:
      Db2/
      Db2agents/
      RPMS/
      RPMS/<architecture>
      RPMS/noarch
      SRPMS/
      Note: The <architecture> variable differs based on your hardware. On Intel/AMD it is x86_64, on POWER LE it is ppc64le, and on z-systems it is s390x.
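    To confirm which <architecture> directory applies to your system, check the machine hardware name with the standard uname utility:
    uname -m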
  2. For RHEL 8.1, install the epel-release package, then install the RPMs from the untarred Pacemaker directory:
    1. cd /tmp/Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64/RPMS
    2. dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    3. dnf install */*.rpm
    A sample output:
    [root@ip-172-31-10-145 RPMS]# dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    Red Hat Update Infrastructure 3 Client Configuration 2.0 kB/s | 2.1 kB     00:01    
    Red Hat Enterprise Linux 8 for x86_64 - AppStream fr  30 kB/s | 2.8 kB     00:00    
    Red Hat Enterprise Linux 8 for x86_64 - BaseOS from   26 kB/s | 2.4 kB     00:00    
    epel-release-latest-8.noarch.rpm                      16 kB/s |  21 kB     00:01    
    Dependencies resolved.
    =====================================================================================
     Package               Arch            Version           Repository             Size
    =====================================================================================
    Installing:
     epel-release          noarch          8-7.el8           @commandline           21 k
    
    Transaction Summary
    =====================================================================================
    Install  1 Package
    
    Total size: 21 k
    Installed size: 30 k
    Is this ok [y/N]: y
    Downloading Packages:
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                             1/1 
      Installing       : epel-release-8-7.el8.noarch                                 1/1 
      Running scriptlet: epel-release-8-7.el8.noarch                                 1/1 
      Verifying        : epel-release-8-7.el8.noarch                                 1/1 
    
    Installed:
      epel-release-8-7.el8.noarch                                                        
    
    Complete!
    
    
    [root@ip-172-31-10-145 RPMS]# dnf install */*.rpm
    Extra Packages for Ente     [===                   ] ---  B/s |   0  B     --:-- ETA
    Extra Packages for Ente     [===                   ] ---  B/s |   0  B     --:-- ETA
    Extra Packages for Ente     [   ===                ] ---  B/s |   0  B     --:-- ETA
    Extra Packages for Ente 74% [================      ]  11 MB/s | 3.2 MB     00:00 ETA
    Extra Packages for Enterprise Linux 8 - x86_64       1.8 MB/s | 4.3 MB     00:02
        
    Last metadata expiration check: 0:00:01 ago on Tue 24 Dec 2019 08:43:32 PM UTC.
    Dependencies resolved.
    =====================================================================================
     Package               Arch   Version          Repository                       Size
    =====================================================================================
    Installing:
     crmsh                 noarch 4.1.0-0          @commandline                    717 k
     crmsh-scripts         noarch 4.1.0-0          @commandline                     30 k
     pacemaker-cts         noarch 2.0.2-1.el8      @commandline                    2.0 M
    
    .
    .
    .
    
    Installed:
      crmsh-4.1.0-0.noarch                                                               
      crmsh-scripts-4.1.0-0.noarch                                                       
      pacemaker-cts-2.0.2-1.el8.noarch                                                   
      pacemaker-doc-2.0.2-1.el8.noarch                                                   
      pacemaker-nagios-plugins-metadata-2.0.2-1.el8.noarch                               
      pacemaker-schemas-2.0.2-1.el8.noarch                                               
      python3-parallax-1.0.5-1.el8.noarch                                                
      corosync-3.0.3-1.el8.x86_64                                                        
      corosync-debuginfo-3.0.3-1.el8.x86_64                                              
      corosync-debugsource-3.0.3-1.el8.x86_64                                            
      corosynclib-3.0.3-1.el8.x86_64                                                     
      corosynclib-debuginfo-3.0.3-1.el8.x86_64                                           
      corosynclib-devel-3.0.3-1.el8.x86_64                                               
      corosync-vqsim-3.0.3-1.el8.x86_64                                                  
      corosync-vqsim-debuginfo-3.0.3-1.el8.x86_64                                        
      kronosnet-debugsource-1.13-1.el8.x86_64                                            
      ldirectord-4.4.0-1.el8.x86_64                                                      
      libknet1-1.13-1.el8.x86_64                            
      .
      .
      .
      libaio-0.3.112-1.el8.x86_64                                                        
      gssproxy-0.8.0-14.el8.x86_64                                                       
      libqb-1.0.3-10.el8.x86_64                                                          
      device-mapper-persistent-data-0.8.5-2.el8.x86_64                                   
      libqb-devel-1.0.3-10.el8.x86_64                                                    
      rpcbind-1.2.5-4.el8.x86_64                                                         
    
    Complete!
    For SLES 15 SP1, add the backports repository, then install the RPMs from the untarred Pacemaker directory:
    1. cd /tmp/Db2_v11.5.4.0_Pacemaker_20200418_SLES15SP1_x86_64/RPMS
    2. zypper addrepo -fc http://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15-SP1/standard/openSUSE:Backports:SLE-15-SP1.repo
    3. zypper install --allow-unsigned-rpm {noarch,x86_64}/*.rpm
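    To confirm that the backports repository was added on SLES, you can list the configured repositories (a quick check, not part of the formal procedure):
    zypper repos | grep -i backports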
  3. Verify that the following packages are installed. The output may vary slightly for different architectures and Linux distributions. All packages should include the db2pcmk text in the output. For example:
    [root@ip-172-31-15-79 RPMS]# rpm -q corosync 
    corosync-3.0.3-1.db2pcmk.el8.x86_64 
    [root@ip-172-31-15-79 RPMS]# rpm -q pacemaker 
    pacemaker-2.0.2-1.db2pcmk.el8.x86_64
    [root@ip-172-31-15-79 RPMS]# rpm -q crmsh 
    crmsh-4.1.0-0.db2pcmk.el8.noarch
  4. Copy the db2cm utility from the cluster software directory into the instance sqllib/bin directory:
    cp /tmp/Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64/Db2/db2cm /home/db2inst1/sqllib/bin
    chmod 755 /home/db2inst1/sqllib/bin/db2cm
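    To confirm the copy and the permissions, you can list the file on each host:
    ls -l /home/db2inst1/sqllib/bin/db2cm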
  5. Copy the resource agent scripts (db2hadr, db2inst, db2ethmon) from the Db2agents directory of the extracted package into /usr/lib/ocf/resource.d/heartbeat/ on both hosts:
    /home/db2inst1/sqllib/bin/db2cm -copy_resources /tmp/Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64/Db2agents -host ip-172-31-10-145
    /home/db2inst1/sqllib/bin/db2cm -copy_resources /tmp/Db2_v11.5.4.0_Pacemaker_20200418_RHEL8.1_x86_64/Db2agents -host ip-172-31-15-79
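    To confirm that the agents were copied, you can list them on each host; three db2* entries matching the scripts named above should appear:
    ls /usr/lib/ocf/resource.d/heartbeat/db2*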
  6. Repeat Steps 1 to 4 on the second host.

What to do next

Proceed to Configuring a clustered environment using the Db2 cluster manager (db2cm) utility to create the Pacemaker cluster and resources.