Instead of connecting directly to the head node of a Db2® Warehouse MPP cluster, you can connect through an HAProxy load balancer that runs on a separate server. Using the HAProxy load balancer is optional
but recommended.
During a failover, such as after a head node crash, the MPP head node role is transferred to one
of the data nodes so that processing can continue. If you use HAProxy to connect to the head
node, HAProxy automatically routes connections to the new head node, so you can continue to use the
original IP address; you do not have to determine which node became the head node and connect by using its IP
address.
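For example, a client connects by using the load balancer address rather than the address of any individual node. A sample JDBC URL follows; it assumes the default Db2 Warehouse database name BLUDB and the non-SSL port 50000:
jdbc:db2://IP_address_of_load_balancer_VM:50000/BLUDB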
Before you begin
Deploy a Db2 Warehouse MPP cluster as described in
one of the subtopics of Deploying IBM Db2 Warehouse.
The operating system of the VM that hosts the load balancer must be CentOS 7.2 or later, and the VM cannot be part of the cluster.
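For example, you can check the operating system level by inspecting the release file (a quick check, assuming a standard CentOS image):
cat /etc/centos-release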
Procedure
- As root, log in to the VM that will host the HAProxy load balancer for the MPP cluster.
- Install HAProxy:
- Install the necessary utilities for the HAProxy server by issuing the following
commands:
yum install -y make gcc perl pcre-devel zlib-devel
yum install -y openssl.x86_64 openssl-devel.x86_64 openssl-static.x86_64 httpd
- Download the latest version of the HAProxy package. A sample command follows. To determine the
latest version, go to the HAProxy download
page.
wget -O /tmp/haproxy.tgz http://www.haproxy.org/download/1.6/src/haproxy-1.6.6.tar.gz
- Extract the downloaded package by issuing the following
command:
tar -zxvf /tmp/haproxy.tgz -C /tmp
- Change directories, as follows:
cd /tmp/haproxy-*
- Install HAProxy by issuing the following
commands:
make TARGET=linux2628 USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_OPENSSL=1 SSL_INC=/usr/include SSL_LIB=/usr/lib \
  ADDLIB=-ldl CFLAGS="-O2 -g -fno-strict-aliasing -DTCP_USER_TIMEOUT=18"
make install
- Copy the necessary HAProxy files to the appropriate directories, as follows:
cp /usr/local/sbin/haproxy /usr/sbin/
cp /tmp/haproxy-1.6.6/examples/haproxy.init /etc/init.d/haproxy
- Set the permissions on the init script and create the necessary directories, as follows:
chmod 755 /etc/init.d/haproxy
mkdir -p /etc/haproxy
mkdir -p /run/haproxy
mkdir -p /var/lib/haproxy
touch /var/lib/haproxy/stats
- Add a user for HAProxy by issuing the following
command:
useradd -r haproxy
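Because haproxy.init is a SysV init script, you can optionally register it so that the service starts at boot. The following commands are a reasonable addition on CentOS 7 but are not part of the original procedure:
# Optional: register the SysV init script and enable start at boot
chkconfig --add haproxy
chkconfig haproxy on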
- Verify that HAProxy was correctly installed by issuing the following command:
haproxy -v
- Configure the rsyslog utility by uncommenting the $ModLoad and $UDPServerRun lines in the /etc/rsyslog.conf file, as follows:
#==================================
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
#==================================
- Restart the rsyslog utility by issuing the following command:
service rsyslog restart
HAProxy logs to the /var/log/haproxy.log file.
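By default, rsyslog might not route the local2 facility (which the haproxy.cfg file in the next step uses) to that file. If no such rule exists in your rsyslog configuration, one possible addition, shown here as an example rather than as part of the original procedure, is:
# Route HAProxy (local2) messages to their own log file
local2.*    /var/log/haproxy.log
Place the rule in /etc/rsyslog.conf or in a file under /etc/rsyslog.d/, and restart rsyslog again after you add it.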
- Create a /etc/haproxy/haproxy.cfg configuration file that contains the information in the following example, with the following exceptions:
- Replace the server names and IP addresses with your own server names and IP addresses.
- In the stats auth haproxy:temp4now line, replace temp4now with your own password.
#=============================================================
# Beginning of /etc/haproxy/haproxy.cfg
#=============================================================
global
    log 127.0.0.1 local2                # Log configuration
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

# Statistics page
listen http_web
    bind *:80
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /stats
    stats auth haproxy:temp4now

# [HTTPS Site Configuration]
listen https_web
    bind *:8443
    mode tcp
    balance source                      # Load balancing algorithm
    server bluhelix50 169.53.136.142:8443 check
    server bluhelix51 169.53.136.143:8443 check
    server bluhelix52 169.53.136.145:8443 check
    server bluhelix57 169.53.136.132:8443 check
    server bluhelix58 198.11.214.169:8443 check
    server bluhelix59 169.53.136.156:8443 check

# port 50000 access
listen bluhelix_port_50000
    bind *:50000
    mode tcp
    balance source
    server bluhelix50 169.53.136.142:50000 check
    server bluhelix51 169.53.136.143:50000 check
    server bluhelix52 169.53.136.145:50000 check
    server bluhelix57 169.53.136.132:50000 check
    server bluhelix58 198.11.214.169:50000 check
    server bluhelix59 169.53.136.156:50000 check

# SSL port 50001 access
listen bluhelix_port_50001
    bind *:50001
    mode tcp
    balance source
    server bluhelix50 10.122.59.160:50001 check
    server bluhelix51 10.122.59.131:50001 check
    server bluhelix52 10.122.59.136:50001 check
    server bluhelix57 10.122.59.139:50001 check
    server bluhelix58 10.90.61.59:50001 check
    server bluhelix59 10.122.59.152:50001 check
#=============================================================
# End of /etc/haproxy/haproxy.cfg
#=============================================================
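Before you start or restart the service, you can optionally validate the syntax of the configuration file; the -c flag checks the configuration without starting HAProxy:
haproxy -c -f /etc/haproxy/haproxy.cfg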
- Start the HAProxy service by issuing the following command:
service haproxy start
- Check the status of the HAProxy service by issuing the following command:
service haproxy status
The output should show that the status is Active: active (running), as shown in
the following example:
haproxy.service - SYSV: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments.
Loaded: loaded (/etc/rc.d/init.d/haproxy)
Active: active (running) since Wed 2016-07-13 19:21:12 CDT; 12min ago
- Make the changes in the /etc/haproxy/haproxy.cfg file take effect:
- Issue the following command:
service haproxy restart
- If the Starting frontend GLOBAL: cannot bind UNIX socket [/run/haproxy/admin.sock] message is displayed, issue the following
commands:
mkdir -p /run/haproxy/
service haproxy restart
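Because /run is typically a tmpfs file system on CentOS 7, the /run/haproxy directory is lost at reboot. One way to recreate it automatically at boot, offered here as a suggestion rather than as part of the original procedure, is a systemd-tmpfiles rule:
# Recreate /run/haproxy at every boot (hypothetical file name)
echo 'd /run/haproxy 0755 root root -' > /etc/tmpfiles.d/haproxy.conf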
- Test the HAProxy setup by attempting to access the web console through the IP address of the load balancer VM, as follows:
https://IP_address_of_load_balancer_VM:8443
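You can also test the database ports through the load balancer from a system that has a Db2 client installed. The following commands are a sketch, assuming the default Db2 Warehouse database name BLUDB and user bluadmin; HAPROXY is an arbitrary node name:
db2 catalog tcpip node HAPROXY remote IP_address_of_load_balancer_VM server 50000
db2 catalog database bludb at node HAPROXY
db2 connect to bludb user bluadmin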
What to do next
If you want to determine the current head node of the MPP cluster, you can use the HAProxy statistics page at the following URL, logging in with the credentials from the stats auth line of the configuration file:
http://IP_address_of_load_balancer_VM/stats