Deploying IBM Db2 Warehouse MPP on Linux and Amazon Web Services

You can deploy Db2® Warehouse on AWS, which is a secure cloud services platform.

You can use either Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) for storage when you deploy on AWS. However, the instructions in this topic are based on EFS because it provides the following advantages:
  • Managing storage is easier with EFS.
  • EFS provides faster and easier scalability.
  • EFS is a ready-to-use cluster file system. A cluster file system is required for an AWS deployment. If you want to use EBS, you must install and configure a separate cluster file system.
  • Managing HA is easier with EFS. If you use EBS, you must remount the storage volumes that are attached to individual nodes.

Before you begin

Set up an EFS file system and mount it on Amazon Elastic Compute Cloud (EC2) instances by using a dedicated host. Use the instructions on the Amazon Elastic File System (Amazon EFS) page, but apply the following changes to the section on launching two instances and mounting an EFS file system:
  • On the Choose an Amazon Machine Image page, select an Amazon Linux® AMI with 8 cores and 64 GB of memory.
  • On the Configure Instance Details page, type 3 (not 2) in the Number of instances field.
  • After configuring the instance details, on the Add Storage page, enter 30 GB for the root volume.

Ensure that you meet the prerequisites described in Getting container images.

Procedure

  1. Connect to each of the three EC2 instances that you created. For information, see Connect to Your Linux Instance.
  2. Ensure that you have root authority on the host operating system.
  3. Refer to Configuration options and note any options whose default settings need to be overridden. Later in this procedure, you will be instructed to specify the new settings.
  4. Log in to Docker by using your API key:
    echo <apikey> | docker login -u iamapikey --password-stdin icr.io
    where <apikey> is the API key that you created as a prerequisite in Getting container images.
  5. Ensure that the dates and times on all of your node hosts are synchronized.
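Clock drift between node hosts can cause deployment problems. One way to spot drift is sketched below; the ssh collection step is shown only as a comment, and max_drift is a hypothetical helper (not part of Db2 Warehouse) that reports the spread of a set of epoch-second readings:

```shell
#!/bin/sh
# Sketch: check clock synchronization across node hosts.
# In a real check, collect each host's time with, for example:
#   ssh "$host" date +%s
# max_drift then reports the spread (in seconds) of those readings.
max_drift() {
    min=$1
    max=$1
    for t in "$@"; do
        [ "$t" -lt "$min" ] && min=$t
        [ "$t" -gt "$max" ] && max=$t
    done
    echo $((max - min))
}

# Example readings (seconds since the epoch) from three hosts:
max_drift 1700000010 1700000012 1700000011   # prints 2
```

If the reported drift is more than a second or two, synchronize the hosts with your time service (for example, chrony or ntpd) before continuing.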
  6. In the file system that the nodes share, create a nodes configuration file with the name /mnt/clusterfs/nodes.
    This file specifies, for each node, the node’s type, host name, and IP address in the form node_type=node_hostname:node_IP_address. For the host name, specify the short name that is returned by the hostname -s command, not the fully qualified domain name. For example, the following file defines a three-node cluster:
    
    head_node=test27:10.0.0.27
    data_node1=test28:10.0.0.28
    data_node2=test29:10.0.0.29
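A quick sanity check of the nodes file can catch formatting mistakes before deployment. The following sketch is illustrative only: validate_nodes_file is a hypothetical helper, and the demonstration runs against a temporary copy rather than /mnt/clusterfs/nodes.

```shell
#!/bin/sh
# Illustrative helper (not part of Db2 Warehouse): succeeds only if every
# line of the given file matches node_type=node_hostname:node_IP_address.
validate_nodes_file() {
    ! grep -Evq '^[A-Za-z0-9_]+=[A-Za-z0-9.-]+:[0-9]{1,3}(\.[0-9]{1,3}){3}$' "$1"
}

# Demonstration against a temporary copy; on a real cluster, point the
# check at /mnt/clusterfs/nodes instead.
NODES_FILE=$(mktemp)
cat > "$NODES_FILE" <<'EOF'
head_node=test27:10.0.0.27
data_node1=test28:10.0.0.28
data_node2=test29:10.0.0.29
EOF

if validate_nodes_file "$NODES_FILE"; then
    echo "nodes file format OK"
else
    echo "nodes file is malformed" >&2
fi
rm -f "$NODES_FILE"
```

Remember that each host name must be the short name returned by hostname -s, which a pattern check like this cannot verify for you.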
  7. Issue the following docker run command concurrently on all node hosts. Do not wait for the command to finish on one node host before issuing it on another. This command pulls, creates, and initializes the latest Db2 Warehouse container on each node host.
    Note: If necessary, add to the following command one -e parameter for each configuration option that is to be set during deployment. See Configuration options for more information. For example, to both enable Spark and use row-organized storage, include the following option settings in your docker run command:
    -e DISABLE_SPARK=NO -e TABLE_ORG=ROW
    Issue the following docker run command:
    docker run -d -it --privileged=true --net=host --name=Db2wh -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux
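The concurrent launch can be scripted from a single workstation. The following sketch assumes passwordless SSH to each node host and uses the placeholder host names from the example nodes file; for illustration, the hypothetical run_on_host helper only echoes the action, with the real docker run shown as a comment:

```shell
#!/bin/sh
# Illustrative pattern for issuing docker run on all node hosts at once.
# In a real deployment, replace the echo in run_on_host with something like:
#   ssh "$1" "docker run -d -it --privileged=true --net=host --name=Db2wh \
#     -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 \
#     icr.io/obs/hdm/db2wh_ee:v11.5.6.0-db2wh-linux"
run_on_host() {
    echo "launching Db2 Warehouse container on $1"
}

for host in test27 test28 test29; do   # placeholder host names
    run_on_host "$host" &              # & backgrounds each launch
done
wait                                   # return only after all launches are issued
echo "docker run issued on all node hosts"
```

Backgrounding each launch with & and then calling wait ensures that the command is issued on every node host at roughly the same time, which is what this step requires.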
  8. On the head node, issue the following command to check whether the deployment is progressing successfully:
    docker logs --follow Db2wh
  9. After the deployment finishes, a message indicates the web console URL and login information. Note this information for later.
  10. Exit the Docker logs by pressing Ctrl+C.
  11. On the head node, set a new password for the bluadmin user by issuing the following command:
    docker exec -it Db2wh setpass new_password
  12. Log in to the web console by using the URL that was displayed in Step 9. The URL has the form https://head_node_IP_address:8443.