
Setting up Virtual IP address for two-node Db2 HADR Pacemaker cluster with Network Load Balancer on Google Cloud

How To


Summary

This document provides step-by-step procedures to set up virtual IP (VIP) addresses for an existing Db2 HADR configuration with Pacemaker on Google Cloud (GC). VIPs establish communication between the Db2 database and the application and are routed by the Google Cloud Internal Load Balancer.

Objective

The internal passthrough Network Load Balancer service with failover support routes client traffic to the primary database in a Db2 HADR cluster.
GC passthrough Network Load Balancers use virtual IP (VIP) addresses, backend services, instance groups, and health checks to route the traffic. The Pacemaker cluster is configured to use the GC Load Balancer resource agent to respond to the health checks, which enables transparent connection routing and failover to the Db2 databases in the cluster.
 

Note:
You have a Virtual Private Cloud network on Google Cloud. For instructions on configuring a VPC network and firewall rules, and for setting up a NAT gateway or bastion host, see: VPC networks  |  Google Cloud
To get more information on the internal Load Balancer, refer to: Internal passthrough Network Load Balancer overview  |  Load Balancing  |  Google Cloud.

The following figure shows a high-level overview of a 2-node Db2 HADR setup with a virtual IP address. Clients access the database through the virtual IP address.

Db2 HADR setup with virtual IP address

 
In this document, we use two virtual machines in the cluster:
The hostnames of the machines are pcmkdb01 and pcmkdb02. The HADR cluster is already set up and uses the ports 5951 and 5952 for communication between the HADR primary and standby databases. The Google Load Balancer uses the IP address 10.132.0.11.
The network traffic that targets this IP address is redirected to either host pcmkdb01 or pcmkdb02, which are assigned to two GC instance groups.
The two instance groups exist only once for the cluster. In contrast, you need to create the GC Load Balancer, the GC health check, and the Pacemaker load-balancing resources for each virtual IP address.

The Google Load Balancer checks the availability of the hosts on port 60000 in our example. The Pacemaker cluster resource of type “gcp-ilb” responds to this request.
The configuration of the Pacemaker cluster ensures that this resource is running only once for a database and health check port and so network traffic is always redirected to one of the hosts. The application with a Db2 client communicates through the IP address 10.132.0.11 that acts as a virtual IP address.

Note:
The example describes how to set up a virtual IP address for a primary database in a Pacemaker cluster that contains one HADR pair only. If you are running multiple databases in the cluster, you need to set up a Google Load Balancer and a “gcp-ilb” Pacemaker resource for each database, and you need to specify a different, unique port number for each. If you want to add a virtual IP address for the standby database, you also need a dedicated Load Balancer with a unique port number and name. However, the Google Cloud instance groups exist only once for the whole cluster.

 

Environment

This document describes the optional feature of using a VIP connection.
As a prerequisite, make sure that you have already set up the Db2 HADR cluster. Also, make sure that you created the cluster with the Db2 integrated Pacemaker cluster manager and that you configured Pacemaker resources for the Db2 instance and its databases. Refer to the “Configuring high availability with the Db2 cluster manager utility (db2cm)” page of the IBM Documentation to deploy the automated HADR solution: Configuring high availability with the Db2 cluster manager utility (db2cm) – IBM Documentation.

The restrictions in the IBM Documentation page apply here: Restrictions on Pacemaker - IBM Documentation.

As a prerequisite, the virtual machines in the cluster must reside in the same region but can reside in different zones to increase the resiliency of the cluster. Also, the virtual IP address used must reside in the same region as the nodes in the cluster.

In addition, make sure that the GC guest environment is available on all nodes in the cluster. 
The guest environment is deployed and set up automatically with each Google-provided public image.
If you are using a custom image, ensure that the guest environment is set up according to the following documentation:
Guest environment  |  Compute Engine Documentation  |  Google Cloud.
In addition, you need to launch GC Cloud Shell and authorize the gcloud utility. This process is described here:
Launch Cloud Shell  |  Google Cloud.

To set up the Pacemaker cluster with VIP addresses, it is beneficial to follow a consistent naming convention for the different components.
In our case, the names of the entities are derived from the hostnames in the cluster, the Db2 instance name, and the database name. In this document, we use placeholder names for the entities that you need to replace with the names in your environment. For every database with a primary or standby virtual IP address, you need to create a Google Load Balancer with a unique name and port.

Steps

To prepare the environment, you first need to set up the required infrastructure within Google Cloud, followed by the Pacemaker resources in the virtual machines in the cluster. The configuration needs to be done for each primary or standby database in the Pacemaker cluster that requires a virtual IP address. When you do so, you must use a unique port for the health check and unique names for the GC infrastructure and Pacemaker cluster components.
1. Decide on namespaces and IP addresses
Compile a list of all host names, including virtual host names, and update your DNS servers to enable proper IP address to host-name resolution. If a DNS server doesn't exist or you can't update and create DNS entries, you need to use the local host files. Make sure to include all the individual virtual machines that are participating in this scenario. For an introduction to DNS, refer to: Internal DNS  |  Compute Engine Documentation  |  Google Cloud
If you're using host file entries, make sure that the entries are applied to all virtual machines in the environment; an example is shown after the table. Also, compile a list of names for the different entities required, according to the example shown in the following table.
Naming Example

Entity                                             Name
Google Cloud Project                               db2pcmk
Google Cloud Region                                europe-west1
Google Cloud Zone                                  europe-west1-b
Google Cloud VPC                                   default
Google Cloud Subnet                                default
Db2 Instance                                       db2gp1
Db2 Database                                       GP1
Hostname/tag of cluster node 1                     pcmkdb01
Hostname/tag of cluster node 2                     pcmkdb02
IP address of cluster node 1                       10.132.0.2
IP address of cluster node 2                       10.132.0.3
Hostname for the virtual IP address                db2gp1gp1pvip
Virtual IP address                                 10.132.0.11
Google Cloud virtual IP address                    db2gp1-gp1-primary-vip
Google Cloud health check                          db2gp1-gp1-primary-hc
Port number for the Google Cloud health check      60000
Google Cloud firewall rule                         db2gp1-gp1-primary-fwall
Google Cloud first instance group                  db2pcmk-db2pcmkdb01-group
Google Cloud second instance group                 db2pcmk-db2pcmkdb02-group
Google Cloud Load Balancer backend                 db2gp1-gp1-primary-ilb
Google Cloud forwarding rule                       db2gp1-gp1-primary-rule
Pacemaker cluster:
Pacemaker Load Balancer resource                   db2_db2gp1_db2gp1_GP1-primary-lbl
Pacemaker Load Balancer colocation constraint      col_db2_db2gp1_db2gp1_GP1_primary-lbl
Pacemaker Load Balancer order rule                 ord_db2_db2gp1_db2gp1_GP1_primary-lbl
Pacemaker VIP resource                             db2_db2gp1_db2gp1_GP1-primary-VIP
Pacemaker VIP colocation constraint                col_db2_db2gp1_db2gp1_GP1_primary-VIP
Pacemaker VIP order rule                           ord_db2_db2gp1_db2gp1_GP1_primary-VIP
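If you maintain name resolution through local host files, the entries on each virtual machine and on the application host could look like the following example (hostnames and IP addresses taken from the table above; replace them with your own values):
10.132.0.2     pcmkdb01
10.132.0.3     pcmkdb02
10.132.0.11    db2gp1gp1pvip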
 
2. Create Compute Engine instance groups.
In Cloud Shell, create an unmanaged instance group for each node in the cluster. Assign one virtual machine of the cluster to each instance group. Note: The instance group, and not the virtual machine directly, is assigned to the Load Balancer backend service.
Create the first instance group and assign the first node in the cluster:
gcloud compute instance-groups unmanaged create db2pcmk-db2pcmkdb01-group --zone=europe-west1-b 
gcloud compute instance-groups unmanaged add-instances db2pcmk-db2pcmkdb01-group --zone=europe-west1-b --instances=pcmkdb01 
Create the second instance group and assign the second node in the cluster:
gcloud compute instance-groups unmanaged create db2pcmk-db2pcmkdb02-group --zone=europe-west1-b 
gcloud compute instance-groups unmanaged add-instances db2pcmk-db2pcmkdb02-group --zone=europe-west1-b --instances=pcmkdb02 
Note:
You must create the instance groups only once for your HADR Cluster. The instance groups are used for all load balancers and virtual IP addresses used in the cluster.
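To verify that each instance group contains the expected virtual machine, you can optionally list the assigned instances in Cloud Shell (names as in the example above):
gcloud compute instance-groups unmanaged list-instances db2pcmk-db2pcmkdb01-group --zone=europe-west1-b
gcloud compute instance-groups unmanaged list-instances db2pcmk-db2pcmkdb02-group --zone=europe-west1-b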

3. Reserve an IP address for the virtual IP
In Cloud Shell, reserve and validate an IP address for each primary or standby database in the cluster.
gcloud compute addresses create db2gp1-gp1-primary-vip --project=db2pcmk2023 --addresses=10.132.0.11 --region=europe-west1 --subnet=default 
Once you have created the IP address in the Google Cloud backend, you can associate the IP address with a virtual hostname. You can also configure GC DNS services or add the IP address to your local /etc/hosts file on the application host.
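To validate the reservation, you can optionally display the reserved address in Cloud Shell:
gcloud compute addresses describe db2gp1-gp1-primary-vip --region=europe-west1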
 
4. Create the Cloud Load Balancing health checks
In Cloud Shell, create and validate the health check. To avoid conflicts with other services, designate a free port from the private range, 49152-65535. The check-interval and timeout values are chosen to increase failover tolerance during Compute Engine live migration events. You can adjust the values, if necessary.
gcloud compute health-checks create tcp db2gp1-gp1-primary-hc --port=60000 --proxy-header=NONE --check-interval=10 --timeout=10 --unhealthy-threshold=2 --healthy-threshold=2 
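To validate the health check parameters, you can optionally display the health check definition in Cloud Shell:
gcloud compute health-checks describe db2gp1-gp1-primary-hc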

  

5. Create a firewall rule for the health check
Ensure that the virtual machines are defined with network tags, for example, use the host names as tags.
Refer to the following documentation on network tags: Add network tags.
To check the network tag definition of your virtual machines, use the following commands:
gcloud compute instances describe pcmkdb01 --format='table(name,status,tags.list())'
gcloud compute instances describe pcmkdb02 --format='table(name,status,tags.list())'
 
In Cloud Shell, define a firewall rule for a port in the private range that allows access to your host VMs from the IP ranges that are used by Cloud Load Balancing health checks, 35.191.0.0/16, and 130.211.0.0/22. For more information about firewall rules for Load Balancers, see: Creating firewall rules for health checks.
gcloud compute firewall-rules create db2gp1-gp1-primary-fwall --network=default --action=ALLOW --direction=INGRESS --source-ranges=35.191.0.0/16,130.211.0.0/22 --target-tags=pcmkdb01,pcmkdb02 --rules=tcp:60000 
Hint:
If you have multiple virtual IP addresses in your cluster, you can optionally create one single firewall rule for the complete cluster and add each health check port to this firewall rule. This approach reduces the number of firewall rules but can decrease the clarity of the setup. Therefore, in this document, we use a dedicated firewall rule for each virtual IP address, and we incorporated the database name and the HADR role in the firewall rule name. The following example updates an existing firewall rule with the two ports 60000 and 60020.
gcloud compute --project=db2pcmk2023 firewall-rules update db2gp1-gp1-primary-fwall --rules=tcp:60000,tcp:60020
Note:
If you update a firewall rule, you have to specify both existing and new ports in the command.
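To review which ports and source ranges the firewall rule currently allows, you can optionally display the rule in Cloud Shell:
gcloud compute firewall-rules describe db2gp1-gp1-primary-fwall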
 
 
6. Create internal passthrough Network Load Balancer
In Cloud Shell, create an internal network Load Balancer and add both Compute Engine instance groups as a backend to the Load Balancer.
Note: One of the Compute Engine instance groups needs to be specified as a failover group.
Create the internal Load Balancer:
gcloud compute backend-services create db2gp1-gp1-primary-ilb --load-balancing-scheme internal --health-checks db2gp1-gp1-primary-hc --no-connection-drain-on-failover --drop-traffic-if-unhealthy --failover-ratio 1.0 --region europe-west1 --global-health-checks
Add the first instance group to the Load Balancer:
gcloud compute backend-services add-backend db2gp1-gp1-primary-ilb --instance-group db2pcmk-db2pcmkdb01-group --instance-group-zone europe-west1-b --region europe-west1
Add the second instance group as a failover group to the Load Balancer:
gcloud compute backend-services add-backend db2gp1-gp1-primary-ilb --instance-group db2pcmk-db2pcmkdb02-group --instance-group-zone europe-west1-b --failover --region europe-west1
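To verify that both instance groups are attached and that the second instance group is marked as the failover backend, you can optionally display the backend service in Cloud Shell:
gcloud compute backend-services describe db2gp1-gp1-primary-ilb --region europe-west1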
 
 
7. Create the forwarding rule from the VIP to the backend service
In Cloud Shell, create the forwarding rule for the backend service:
gcloud compute forwarding-rules create db2gp1-gp1-primary-rule --load-balancing-scheme internal --address 10.132.0.11 --subnet default --region europe-west1 --backend-service db2gp1-gp1-primary-ilb --ports ALL
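To verify that the forwarding rule maps the virtual IP address to the backend service, you can optionally display the rule in Cloud Shell:
gcloud compute forwarding-rules describe db2gp1-gp1-primary-rule --region europe-west1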
 
8. Test the Google Cloud Load Balancer setup
To verify that the GC backend is set up correctly, we recommend checking the load balancing before you continue to create the Pacemaker resources, rules, and constraints.
To do so, on both virtual machines in the cluster, use the socat utility to respond to the health check.

To manually respond to the health check, execute as user root:
timeout 60s socat - TCP-LISTEN:60000,fork
 
Within the defined timeout of 60 seconds, check the status of the health check in Cloud Shell.
gcloud compute backend-services get-health db2gp1-gp1-primary-ilb --region europe-west1 | grep 'healthState:\|ipAddress:'
The output of this test looks like the following example:
healthState: UNHEALTHY 
ipAddress: 10.132.0.2
healthState: HEALTHY
ipAddress: 10.132.0.3
After 60 seconds, the socat utility ends and the health state reverts to ‘UNHEALTHY’.
 
 
9. Create the Load Balancer primitive on one of the nodes in the cluster
After the setup in the GC Infrastructure is completed, the Pacemaker cluster needs to be configured to listen on the GC health check.
To do so, the Load Balancer resource of type ‘gcp-ilb’ needs to be created. The resource listens and responds to the GC health check and enables the GC backend to redirect the network traffic to the right node in the cluster. Repeat this step for each database in the cluster that needs to be reachable through the GC Load Balancer and a VIP. The resource is part of an order rule and a colocation constraint and is started after the configuration is completed.
On one of the nodes in the cluster, use the db2cm command to create the Load Balancer primitive, colocation constraint, and order rule. Issue the following command as user root:
db2cm -create -gc -primarylbl 60000 -db GP1 -instance db2gp1
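After the command completes, you can optionally verify that the new resource is started on the node that hosts the HADR primary database. A minimal check, assuming the example naming in this document and the crmsh shell that is installed with the Db2 Pacemaker stack:
crm status
db2cm -list
The Load Balancer resource db2_db2gp1_db2gp1_GP1-primary-lbl should be shown as started on the primary node, and the GC health check from step 8 should now report that node as HEALTHY.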
 
10. Optionally: Enable communication between the cluster nodes
The steps that are described in the previous sections enable VIP communication for the Pacemaker cluster. The GC internal Load Balancer manages this communication, and it works for all network traffic from any host that is not part of the cluster.
In this setup, communication from a cluster node to the virtual IP address is always routed back to that node through a local loopback.
This setup works in most cases where the application runs on a host that is not part of the cluster. If your application needs to run on one of the cluster nodes and you want its network traffic to go through the VIP, you need to change the default configuration of the GC guest environment. In this case, you also create an extra cluster resource of type IPaddr2.
A) Configure the GCP Guest environment
You enable backend communication between the VMs by modifying the configuration of the google-guest-agent. This agent is included in the Linux guest environment for all Linux public images that are provided by Google Cloud.
To enable Load Balancer back-end communications, perform the following steps on each virtual machine that is part of your cluster.
Stop the GC guest service:
systemctl stop google-guest-agent
Open or create the file /etc/default/instance_configs.cfg for editing:
Search for the sections IpForwarding and NetworkInterfaces. If the sections don't exist, create them based on the following example.
To enable communication through the VIP and the GC internal Load Balancer, set the target_instance_ips and ip_forwarding properties to false.
[IpForwarding] 
ethernet_proto_id = 66 
ip_aliases = true 
target_instance_ips = false 
 
[NetworkInterfaces] 
dhclient_script = /sbin/google-dhclient-script 
dhcp_command = 
ip_forwarding = false 
setup = true 
Once you have made the change, reboot both servers to activate it.
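To verify the settings after the reboot, you can optionally inspect the configuration file and the guest agent status on each node:
grep -A 3 '\[IpForwarding\]' /etc/default/instance_configs.cfg
grep -A 4 '\[NetworkInterfaces\]' /etc/default/instance_configs.cfg
systemctl status google-guest-agent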
B) Create the Pacemaker resource for the virtual IP
After the setup in the GC guest environment is completed, the Pacemaker cluster needs to be configured with the IPaddr2 resource. This step needs to be repeated for each database in the cluster that needs to be reachable through the GC Load Balancer and a VIP. The resource is part of an order rule and a colocation constraint and is started after the configuration is completed.
On one of the nodes in the cluster, create the Pacemaker resource for the virtual IP address.
To create the Pacemaker cluster resource for the virtual IP, issue the following db2cm command as user root:
db2cm -create -gc -primaryVIP 10.132.0.11  -db GP1 -instance db2gp1
Once you have created the Pacemaker VIP resource, you can associate the IP address with a virtual hostname and configure GC DNS services. Alternatively, you can add the VIP to your local /etc/hosts file on both hosts in the cluster.
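To test the connection through the virtual IP address, you can catalog the database under the virtual hostname on a Db2 client and connect to it. The following sketch uses a hypothetical node alias GP1NODE and database alias GP1VIP, and assumes that the db2gp1 instance listens on service port 25010; replace the port with the SVCENAME port of your instance:
db2 catalog tcpip node GP1NODE remote db2gp1gp1pvip server 25010
db2 catalog database GP1 as GP1VIP at node GP1NODE
db2 connect to GP1VIP user db2gp1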

Additional Information

Remove Load Balancer and virtual IP resources from the cluster
With the db2cm utility, you can add and remove resources in the cluster or remove the cluster completely.
If you want to delete the optional VIP Pacemaker resource, issue the following command as user root:
db2cm -delete -gc -primaryVIP -db GP1 -instance db2gp1
Once the resource is deleted from the cluster, open the file /etc/default/instance_configs.cfg for editing. Revert the changes to the target_instance_ips and ip_forwarding properties and set them back to true. You must revert the changes on both nodes in the cluster.
Once you have made the change, reboot both servers to clean up and activate it.
To remove the GC Load Balancer resource, issue the following command as user root:
db2cm -delete -gc -primarylbl -db GP1 -instance db2gp1
If you permanently remove the virtual IP and Load Balancer resources from the cluster and you no longer need the Load Balancer, delete the Load Balancer and its related objects in the GC Console as well.
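As an alternative to the GC Console, you can remove the Google Cloud objects in Cloud Shell with gcloud. A possible cleanup sequence for the example names in this document, in reverse order of creation (delete the instance groups only if no other virtual IP address in the cluster uses them):
gcloud compute forwarding-rules delete db2gp1-gp1-primary-rule --region=europe-west1
gcloud compute backend-services delete db2gp1-gp1-primary-ilb --region=europe-west1
gcloud compute health-checks delete db2gp1-gp1-primary-hc
gcloud compute firewall-rules delete db2gp1-gp1-primary-fwall
gcloud compute addresses delete db2gp1-gp1-primary-vip --region=europe-west1
gcloud compute instance-groups unmanaged delete db2pcmk-db2pcmkdb01-group --zone=europe-west1-b
gcloud compute instance-groups unmanaged delete db2pcmk-db2pcmkdb02-group --zone=europe-west1-b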

Document Location

Worldwide

[{"Type":"MASTER","Line of Business":{"code":"LOB10","label":"Data and AI"},"Business Unit":{"code":"BU058","label":"IBM Infrastructure w\/TPS"},"Product":{"code":"SSEPGG","label":"Db2 for Linux, UNIX and Windows"},"ARM Category":[{"code":"a8m3p0000006xc1AAA","label":"High Availability-\u003EPacemaker"}],"ARM Case Number":"","Platform":[{"code":"PF016","label":"Linux"}],"Version":"11.5.8;and future releases"}]

Document Information

Modified date:
14 November 2023

UID

ibm17071284