Upgrading PTS in cluster environment

Deployment options: Netezza Performance Server for Cloud Pak for Data System

Learn how to install or upgrade, and configure, the PTS software in a clustered PTS environment. The installation must be done by the root user.

About this task

Because this is an HA environment, two mounts, /opt/ibm (PTS-Software) and /var/nzrepl (PTS-Data), are part of the HA configuration. Device mapper naming and file system type are obtained by running the df -kh and mount commands.

The PTS HA environment consists of two physical machines per log server, each its own host, for example HA1 and HA2. With a primary and a subordinate log server, there are four PTS boxes in total.

Important: Never mount the software or data volume on two hosts simultaneously. File system damage or a PTS database crash might occur if both hosts update the same blocks on a disk. Before proceeding, capture the outputs of df -kh, cat /proc/mounts, and cat /etc/fstab from all PTS nodes.

This procedure covers both RHEL 6.x and RHEL 7.x cluster commands. To mount the volumes, add the /var/nzrepl and /opt/ibm entries from the output of the cat /proc/mounts command to /etc/fstab, and then run the mount -a command. After the volumes are mounted, remove those entries from /etc/fstab before you upgrade the system. Other useful commands on RHEL 7.x are pcs status and pcs cluster status. Do not bring the cluster down. Instead, disable the PTS service and resources.
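The pre-upgrade outputs can be collected per node with a short script. A minimal sketch; the /tmp output path and the file naming are assumptions for illustration, not part of the product:

```shell
#!/bin/sh
# Capture the pre-upgrade file system state on the local PTS node.
# Run on every PTS node before disabling the cluster service.
# The output location under /tmp is an assumption for this sketch.
out="/tmp/pts_precheck_$(hostname)_$(date +%Y%m%d%H%M%S).txt"

{
  echo "=== df -kh ==="
  df -kh
  echo "=== /proc/mounts ==="
  cat /proc/mounts
  echo "=== /etc/fstab ==="
  cat /etc/fstab
} > "$out"

echo "Saved pre-upgrade state to $out"
```

Keep one such file per node so that the mount layout can be restored exactly after the upgrade.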

Procedure

  1. Log in as root on the replication log server host.
  2. Run the following command.
    export PATH=/opt/ibm/nzpts/bin/:$PATH
  3. Copy the PTS software installation files, pts.tar and installpts, to the replication log server.
    mkdir /tmp/pts
    scp nz@NPS_Server:/nz/kit/pts/* nz@pts_server:/tmp/pts
    cd /tmp/pts
  4. Change to the destination directory where you copied the files with scp.
    cd /tmp/pts
  5. Identify the active PTS node.
    • For RHEL 6.x, run the clustat command.
      clustat
      Example:
      [PRO] [root@pf3il0504 pts]# clustat
       Member Name                          ID   Status
       ------ ----                          ---- ------
       pf3il0504                            1    Online, Local, rgmanager
       pf3il0505                            2    Online, rgmanager
       /dev/block/253:15                    0    Online, Quorum Disk

       Service Name                         Owner (Last)           State
       ------- ----                         ----- ------           -----
       service:PTS-Service                  pf3il0504              started
      [PRO] [root@pf3il0504 pts]#
    • For RHEL 7.x, run the pcs resource command.
      pcs resource
      Example:
      [root@nptprod2 pts]# pcs resource
       Resource Group: PTS-Service
           vgIBMrs    (ocf::heartbeat:LVM):   Started prodpt2
           vgREPDATArs        (ocf::heartbeat:LVM):   Started prodpt2
           ptsDataMount       (ocf::heartbeat:Filesystem):    Started prodpt2
           ptsSoftwareMount   (ocf::heartbeat:Filesystem):    Started prodpt2
           ptsSoftwareDatabase        (systemd:ptsdbd):       Started prodpt2
           ptsSoftware        (systemd:ptsd): Started prodpt2
           publicVirtualIP    (ocf::heartbeat:IPaddr2):       Started prodpt2
           nfsDaemon  (ocf::heartbeat:nfsserver):     Started prodpt2
           nfsNotify  (ocf::heartbeat:nfsnotify):     Started prodpt2
           Public-Link        (ocf::heartbeat:ethmonitor):    Started prodpt2
      [root@nptprod2 pts]#
      
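Which of the two command sets applies can be checked from the tooling that is installed on the host. A small sketch, not part of the documented procedure:

```shell
#!/bin/sh
# Decide which cluster status command applies on this host:
# clustat ships with RHEL 6.x rgmanager, pcs with RHEL 7.x pacemaker.
if command -v clustat >/dev/null 2>&1; then
    msg="RHEL 6.x tooling detected: use clustat"
elif command -v pcs >/dev/null 2>&1; then
    msg="RHEL 7.x tooling detected: use pcs resource"
else
    msg="No cluster tooling (clustat or pcs) found on this host"
fi
echo "$msg"
```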
  6. As the nz user:
    1. From the active PTS node, capture the existing ptsexportsetup details.
      /opt/ibm/nzpts/bin/ptsexportsetup /tmp/pts_export_nodename.sh 
      Replace nodename with the system name or any other unique name.
    2. Run the following command from the active node of both the primary and the subordinate PTS log servers to stop PTS replication:
      ptsreplication -stop -all
      Example:
      [nz@nptprod2 ~]$ ptsreplication -stop -all
      1.      Configured node ptcprod.abc.root.beta.rg:52573 to stop replication.
      2.      Configured node ptdplc.abc.root.beta.rg:52573 to stop replication.
      ptsreplication complete
  7. From the active node, check the PTS-Service status.
    • For RHEL 6.X, run the following command.
      clustat
    • For RHEL 7.X, run the following commands.
      • pcs resource
      • pcs cluster status
      • pcs status
  8. From the active node, disable the PTS-Service.
    • For RHEL 6.X, run the following command.
      clusvcadm -d PTS-Service 
    • For RHEL 7.X, run the following command.
      pcs resource disable PTS-Service
  9. From the active node, do the following.
    1. Ensure that the PTS-Service status is disabled for RHEL 6.X.
      clustat
      Example:
      clustat
      service:PTS-Service <PTS'N'> disabled
    2. Ensure that the PTS-Service resources are stopped for RHEL 7.X:
      pcs resource
      Example:
      [root@nptprod2 pts]# pcs resource
       Resource Group: PTS-Service
           vgIBMrs    (ocf::heartbeat:LVM):   Started prodpt2 (disabled)
           vgREPDATArs        (ocf::heartbeat:LVM):   Stopping prodpt2 (disabled)
           ptsDataMount       (ocf::heartbeat:Filesystem):    Stopped (disabled)
           ptsSoftwareMount   (ocf::heartbeat:Filesystem):    Stopped (disabled)
           ptsSoftwareDatabase        (systemd:ptsdbd):       Stopped (disabled)
           ptsSoftware        (systemd:ptsd): Stopped (disabled)
           publicVirtualIP    (ocf::heartbeat:IPaddr2):       Stopped (disabled)
           nfsDaemon  (ocf::heartbeat:nfsserver):     Stopped (disabled)
           nfsNotify  (ocf::heartbeat:nfsnotify):     Stopped (disabled)
           Public-Link        (ocf::heartbeat:ethmonitor):    Stopped (disabled)
      [root@nptprod2 pts]#
      
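The "all resources stopped" check in the RHEL 7.x example can be scripted by parsing the pcs resource output. A sketch that assumes the resource-line format shown in the example above; adjust the patterns if your resource agents differ:

```shell
#!/bin/sh
# Read 'pcs resource' output on stdin and succeed only when every
# resource line (ocf:: or systemd: agents, per the example above)
# reports Stopped. A 'Stopping' or 'Started' line makes it fail.
all_stopped() {
    ! grep -E '\(ocf::|\(systemd:' | grep -vq 'Stopped'
}
```

Usage would be, for example, `pcs resource | all_stopped && echo "PTS-Service is fully stopped"`.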
  10. If /opt/ibm (PTS-Software) and /var/nzrepl (PTS-Data) are mounted, unmount them.
    Ensure that they are not mounted anywhere else and were not migrated to HA2. Also, ensure that the cluster service is stopped by running clustat.
  11. Mount /opt/ibm (PTS-Software) and /var/nzrepl (PTS-Data) manually on the PTS node, which was active before cluster services were stopped.
    To obtain the device mapper naming and file system type, run the df -kh and mount commands.

    Check with the customer whether there are any special instructions for how these file systems are mounted.

    Example:
    mount -t extX -o rw,nobarrier,user_xattr /dev/mapper/PTS-Software-Grp-PTS-Software /opt/ibm 
    mount -t extX -o rw,nobarrier,user_xattr /dev/mapper/PTS-Data-Grp-PTS-Data /var/nzrepl
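Because of the double-mount hazard called out earlier, the manual mount can be guarded with a check against /proc/mounts. A sketch only; the mount options follow the example above, and the check covers only the local host, so the peer host (HA2) must still be verified separately:

```shell
#!/bin/sh
# Mount a device only if the target is not already a mount point on
# this host. This guards only the local host; confirm on the peer
# host as well before mounting anything.
safe_mount() {
    dev="$1"; mnt="$2"
    if grep -q " $mnt " /proc/mounts; then
        echo "$mnt is already mounted; refusing to mount $dev" >&2
        return 1
    fi
    mount -o rw,nobarrier,user_xattr "$dev" "$mnt"
}
```

Usage would be, for example, `safe_mount /dev/mapper/PTS-Software-Grp-PTS-Software /opt/ibm`.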
  12. Install PTS.
    1. If you are upgrading from 11.0.X to 11.2.X, uninstall the existing PTS as root. Otherwise, skip this step.
      /opt/ibm/nzpts/uninstallpts
    2. Change directories to pts and install PTS:
      • For RHEL 6.X:
        cd /tmp/pts
        ./installpts cman_cluster
      • For RHEL 7.X:
        cd /tmp/pts 
        ./installpts cluster
        or
        cd /tmp/pts
        bash ./installpts cluster
  13. Before you start the installation process on other cluster hosts, unmount the PTS-Software and PTS-Data volumes.
    umount /opt/ibm; umount /var/nzrepl 
    If you do not unmount the volumes, severe file system damage and data loss might occur.
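A final local check before moving to the next cluster host can confirm that neither volume is still mounted. A minimal sketch:

```shell
#!/bin/sh
# Succeed only when neither PTS volume is mounted on this host.
# Run after 'umount /opt/ibm; umount /var/nzrepl' as a safety check.
pts_volumes_unmounted() {
    ! grep -Eq ' (/opt/ibm|/var/nzrepl) ' /proc/mounts
}
```

Usage would be, for example, `pts_volumes_unmounted || echo "unmount the PTS volumes first" >&2`.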
  14. Repeat steps 1-13 on the replication log server Host 2 [HA2].
  15. Repeat steps 1-13 on the subordinate replication log server hosts (HA1 and HA2).
  16. Enable the PTS service on the active node of both the primary and the subordinate log servers only.
    • For RHEL 6.X, run the following commands.
      clusvcadm -e PTS-Service
      clustat
    • For RHEL 7.X, run the following commands.
      pcs resource enable PTS-Service
      pcs resource

      You need to wait for a few minutes for the PTS-Service to start.

      Example:
      [root@ntzrplcprod1 pts]# pcs resource
       Resource Group: PTS-Service
           vgIBMrs    (ocf::heartbeat:LVM):   Started prodpt2
           vgREPDATArs        (ocf::heartbeat:LVM):   Started prodpt2
           ptsDataMount       (ocf::heartbeat:Filesystem):    Started prodpt2
           ptsSoftwareMount   (ocf::heartbeat:Filesystem):    Started prodpt2
           ptsSoftwareDatabase        (systemd:ptsdbd):       Started prodpt2
           ptsSoftware        (systemd:ptsd): Started prodpt2
           publicVirtualIP    (ocf::heartbeat:IPaddr2):       Started prodpt2
           nfsDaemon  (ocf::heartbeat:nfsserver):     Started prodpt2
           nfsNotify  (ocf::heartbeat:nfsnotify):     Started prodpt2
           Public-Link        (ocf::heartbeat:ethmonitor):    Started prodpt2
      [root@ntzrplcprod1 pts]#
      
  17. As the nz user, from the active node on the primary PTS, start PTS replication.
    ptsreplication -start -all
    Example:
    [nz@nptprod2 ~]$ ptsreplication -start -all
    1. Configured node ptcprod.abc.root.beta.rg:52573 to actively replicate.
    2. Configured node ptdplc.abc.root.beta.rg:52573 to actively replicate.
    ptsreplication complete
    [nz@nptprod2 ~]$
    
  18. As the nz user, run the following command.
    ptstopology -list 
    Example:
    [nz@nptprod2 ~]$ ptstopology -list
     Node                             |  Type   |  Status |  Port   |  Clock differential  |  Network latency (ms)
    ----------------------------------+---------+---------+---------+----------------------+---------------------
    ptcprod.abc.root.beta.rg | local   | active  | 52573   | 00:00:00             | 0
    ptdplc.abc.root.beta.rg | remote  | active  | 52573   | 00:00:00             | 21
    [nz@nptprod2 ~]$
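The topology listing can be checked mechanically for nodes that are not active. A sketch that assumes the column layout shown in the example above (Status in the third pipe-separated column):

```shell
#!/bin/sh
# Count nodes in 'ptstopology -list' output (on stdin) whose Status
# column is not 'active'. Prints 0 when every node is replicating.
# Skips the header row and the dashed separator line.
count_inactive() {
    awk -F'|' '/\|/ && $2 !~ /Type/ { gsub(/ /, "", $3); if ($3 != "active") n++ } END { print n + 0 }'
}
```

Usage would be, for example, `ptstopology -list | count_inactive`, expecting 0 before continuing.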

What to do next

Go to Upgrading Netezza Performance Server and Persistent Transient Storage (PTS) and follow steps 5-7 to complete the procedure.