Changing kernel parameter settings

Db2U is a dependency for some services. By default, Db2U runs with elevated privileges in most environments. However, depending on your Red Hat® OpenShift® Container Platform environment, you might be able to change the kernel parameter settings to allow Db2U to run with limited privileges.

Important: If you set up garbage collection on your Red Hat OpenShift Container Platform cluster, the garbage collection settings overwrite the settings that are described in this topic. Reverting the kernel parameter settings might impact services with a dependency on Db2U.
Installation phase
  1. Setting up a client workstation
  2. Collecting required information
  3. Preparing your cluster (you are here)
  4. Installing the Cloud Pak for Data platform and services
Who needs to complete this task?
A cluster administrator must complete this task.
When do you need to complete this task?
Review Determining what privileges Db2U runs with to determine whether you need to complete this task.

Determining what privileges Db2U runs with

Db2U is a dependency for the following services:

  • Db2®
  • Db2 Big SQL
  • Db2 Warehouse
  • Watson Knowledge Catalog
  • Watson Query

The options that are available to you are based on your Red Hat OpenShift Container Platform environment:

Managed OpenShift

You cannot change the node settings. You must allow Db2U to run with elevated privileges.

See Determining what configuration tasks are required to determine whether you need to take any additional steps based on the services that you plan to install.

Self-managed OpenShift

On-premises
You can either:
  • Allow Db2U to run with elevated privileges (default).
  • Change the kernel parameter settings so that Db2U can run with limited privileges.

See Determining what configuration tasks are required to determine whether you must take any additional steps based on the services that you plan to install.


IBM Cloud

If you install Cloud Pak for Data from the IBM Cloud Catalog, the kernel parameter settings are automatically applied to your cluster, and Db2U runs with limited privileges.

If you manually install Cloud Pak for Data, you can either:
  • Allow Db2U to run with elevated privileges (default).
  • Change the kernel parameter settings so that Db2U can run with limited privileges.

See Determining what configuration tasks are required to determine whether you must take any additional steps based on the services that you plan to install.


Amazon Web Services
You can either:
  • Allow Db2U to run with elevated privileges (default).
  • Change the kernel parameter settings so that Db2U can run with limited privileges.

See Determining what configuration tasks are required to determine whether you must take any additional steps based on the services that you plan to install.


Microsoft Azure
You can either:
  • Allow Db2U to run with elevated privileges (default).
  • Change the kernel parameter settings so that Db2U can run with limited privileges.

See Determining what configuration tasks are required to determine whether you must take any additional steps based on the services that you plan to install.


Google Cloud
You can either:
  • Allow Db2U to run with elevated privileges (default).
  • Change the kernel parameter settings so that Db2U can run with limited privileges.

See Determining what configuration tasks are required to determine whether you must take any additional steps based on the services that you plan to install.


Determining what configuration tasks are required

Use the following table to determine the appropriate configuration tasks to complete based on the services that you plan to install and whether Db2U will run with elevated privileges:

Db2 Warehouse MPP
  Run Db2U with elevated privileges: No additional configuration is required.
  Run Db2U with limited privileges: Follow the procedure to change the node settings by using the Node Tuning Operator.

Db2 Big SQL or Watson Query
  Run Db2U with elevated privileges: No additional configuration is required.
  Run Db2U with limited privileges: Not supported.

Db2 or Db2 Warehouse SMP
  Run Db2U with elevated privileges: No additional configuration is required.
  Run Db2U with limited privileges: Follow the procedure to change the node settings by running the cpd-cli manage apply-db2-kubelet command.

Watson Knowledge Catalog
  Run Db2U with elevated privileges: On your client workstation, create a file called install-options.yml in the cpd-cli-workspace/olm-utils-workspace/work directory with the following content:

      custom_spec:
        wkc:
          wkc_db2u_set_kernel_params: True
          iis_db2u_set_kernel_params: True

  When you run the apply-cr command to install Watson Knowledge Catalog, specify --param-file=/tmp/work/install-options.yml (see the sketch after this table).
  Run Db2U with limited privileges: Follow the procedure to change the node settings by running the cpd-cli manage apply-db2-kubelet command.
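For reference, the following sketch shows how the --param-file option fits into the apply-cr command for Watson Knowledge Catalog. The release, namespace, and storage class values are placeholders that come from the installation environment variables; confirm the full set of options against the Watson Knowledge Catalog installation instructions for your release.

    cpd-cli manage apply-cr \
    --components=wkc \
    --release=${VERSION} \
    --cpd_instance_ns=${PROJECT_CPD_INSTANCE} \
    --block_storage_class=${STG_CLASS_BLOCK} \
    --file_storage_class=${STG_CLASS_FILE} \
    --license_acceptance=true \
    --param-file=/tmp/work/install-options.yml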

Changing node settings by running the cpd-cli manage apply-db2-kubelet command

You can use the cpd-cli manage apply-db2-kubelet command to set interprocess communication (IPC) kernel parameters if you want to run Db2U with limited privileges for Db2, Db2 Warehouse SMP, or Watson Knowledge Catalog.

Before you begin
Best practice: You can run the commands in this task exactly as written if you set up environment variables. For instructions, see Setting up installation environment variables.

Ensure that you source the environment variables before you run the commands in this task.
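For example, assuming that you saved the variables in a script (the file name cpd_vars.sh is only an example), source the script in each new terminal session before you run the commands:

    # Load the installation environment variables into the current shell.
    # The commands in this task use OCP_USERNAME, OCP_PASSWORD, and OCP_URL.
    source ./cpd_vars.sh
    echo "Logging in to ${OCP_URL} as ${OCP_USERNAME}"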

About this task
Complete this task if you plan to install one of the following services in an environment where you want to run Db2U with limited privileges:
  • Db2
  • Db2 Warehouse SMP
  • Watson Knowledge Catalog
The apply-db2-kubelet command makes the following changes to the cluster nodes:
allowedUnsafeSysctls:
  - "kernel.msg*"
  - "kernel.shm*"
  - "kernel.sem"
Procedure

To change the kernel parameter settings:

  1. Run the cpd-cli manage login-to-ocp command to log in to the cluster as a user with sufficient permissions to complete this task. For example:
    cpd-cli manage login-to-ocp \
    --username=${OCP_USERNAME} \
    --password=${OCP_PASSWORD} \
    --server=${OCP_URL}
  2. Run the following command to apply the kubelet configuration:
    cpd-cli manage apply-db2-kubelet
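The kubelet change is rolled out to the worker nodes through the machine config pools, which can take some time. As a quick check (the object names in the output depend on what the command created), you can run:

    # Confirm that a KubeletConfig object exists
    oc get kubeletconfig

    # Watch the worker machine config pool until UPDATED is True
    oc get mcp worker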
What to do next

Ensure that you complete the appropriate tasks based on the services that you plan to install on your cluster. The tasks must be completed after you install the services.

Db2
  After you install Db2, complete Configure Db2 to be deployed with limited privileges.
Db2 Warehouse SMP
  After you install Db2 Warehouse, complete Configure Db2 Warehouse to be deployed with limited privileges.
Watson Knowledge Catalog
  No additional configuration is required.

Changing node settings by using the Node Tuning Operator

You can use the Red Hat OpenShift Node Tuning Operator to set interprocess communication (IPC) kernel parameters if you want to run Db2U with limited privileges for Db2 Warehouse MPP.

Before you begin

Decide whether you plan to deploy the services on dedicated nodes. With dedicated nodes, you can limit node tuning to the nodes where the service or services will run.

For more information about setting up dedicated nodes, see Setting up dedicated nodes for your Db2 Warehouse deployment.

About this task

Complete this task if you plan to install Db2 Warehouse MPP in an environment where you want to run Db2U with limited privileges.


What is the Node Tuning Operator?

The Node Tuning Operator helps you manage node-level tuning by orchestrating the tuned daemon. Tuned is a system tuning service for Linux®. At the core of Tuned are profiles, which tune your system for different use cases. In addition to the static application of system settings, Tuned can also monitor your system and optimize performance on demand based on the applied profile.

Tuned is distributed with a number of predefined profiles. However, you can also modify the rules that are defined for each profile and customize how and what to tune. Tuned supports various types of system configuration, such as sysctl, sysfs, and kernel boot parameters. For more information, see Monitoring and managing system status and performance and The Tuned Project.

The Node Tuning Operator provides a unified management interface to users of node-level sysctls and gives more flexibility to add custom tuning.

The operator manages the containerized tuned daemon for Red Hat OpenShift Container Platform as a Kubernetes DaemonSet. It ensures the custom tuning specification is passed to all containerized tuned daemons that run in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
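For example, to see the containerized tuned daemon pods and the node that each one runs on, you can run the following command; the label selector is the one that the Node Tuning Operator applies to its tuned pods:

    oc -n openshift-cluster-node-tuning-operator get pods --selector openshift-app=tuned -o wide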

The Node Tuning Operator is part of a standard Red Hat OpenShift Container Platform installation. For more information, see Using the Node Tuning Operator in the Red Hat OpenShift documentation.

Procedure

You can employ the Node Tuning Operator by using one of the following methods:

Creating a custom resource file
The custom resource method requires you to manually compute all required IPC kernel parameters.
Creating a shell script
The shell script generates a YAML file that you can deploy on the target OpenShift cluster, and can optionally create the custom resource for you.

The shell script automatically calculates the required IPC kernel parameters for you.


Creating a custom resource file
  1. Create a Tuned custom resource file.

    Use the following guidance to adjust the contents of the custom resource file:

    • You must compute the values for the IPC kernel parameters that are denoted by <...>. Use the formulas in Kernel parameter requirements (Linux).
    • If your Kubernetes worker node pool is heterogeneous, use the memory resource limit that you plan to apply to the deployment as the RAM size.
    • The match label icp4data and the corresponding value are required only for dedicated deployments.
      • If you use dedicated nodes, the IPC kernel tuning is applied only on the labeled worker nodes.
      • If you don't use dedicated nodes, remove the following lines from the custom resource:
            - label: icp4data
              value: database-db2wh
    • The custom resource injects the IPC sysctl changes on top of the default tuned profile settings on the OpenShift worker nodes.

    The following sample YAML file describes the basic structure that is needed to create the custom resource for a Node Tuning Operator instance that can tune IPC kernel parameters.

    apiVersion: tuned.openshift.io/v1
    kind: Tuned
    metadata:
      name: db2u-ipc-tune
      namespace: openshift-cluster-node-tuning-operator
    spec:
      profile:
      - name: openshift-db2u-ipc
        data: |
          [main]
          summary=Tune IPC Kernel parameters on OpenShift nodes running Db2U engine PODs
          include=openshift-node
    
          [sysctl]
          kernel.shmmni = <shmmni>
          kernel.shmmax = <shmmax>
          kernel.shmall = <shmall>
          kernel.sem = <SEMMSL> <SEMMNS> <SEMOPM> <SEMMNI>
          kernel.msgmni = <msgmni>
          kernel.msgmax = <msgmax>
          kernel.msgmnb = <msgmnb>
    
      recommend:
      - match:
        - label: node-role.kubernetes.io/worker
        - label: icp4data
          value: database-db2wh
        priority: 10
        profile: openshift-db2u-ipc
  2. Save the custom resource as a YAML file. For example: /tmp/Db2UnodeTuningCR.yaml.
  3. Log in to the cluster as a cluster administrator and create the custom resource:
    oc create -f /tmp/Db2UnodeTuningCR.yaml

It might take a few minutes for the custom resource to be created and for the custom IPC tuned profile to be applied on the worker nodes.
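To confirm that the profile was applied, you can check the per-node Profile objects that the Node Tuning Operator maintains, and optionally spot-check one of the kernel parameters on a worker node. The node name is a placeholder:

    # Show which tuned profile each node is using
    oc -n openshift-cluster-node-tuning-operator get profiles.tuned.openshift.io

    # Spot-check one of the tuned sysctl values on a worker node
    oc debug node/<worker-node> -- chroot /host sysctl kernel.shmmni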



Creating a shell script

You can use the following sample shell script to:

  • Generate a YAML file that you can deploy on the target OpenShift cluster, and optionally create the custom resource.
  • Delete the custom resource and clean up deployed tuned profiles.

The sample assumes that the script is saved as /root/script/crtNodeTuneCR.sh.

#!/bin/bash

# Compute IPC kernel parameters as per IBM Documentation topic
# https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0057140.html
# and generate the Node Tuning Operator CR yaml.

tuned_cr_yaml="/tmp/Db2UnodeTuningCR.yaml"
mem_limit_Gi=0
node_label=""
cr_name="db2u-ipc-tune"
cr_profile_name="openshift-db2u-ipc"
cr_namespace="openshift-cluster-node-tuning-operator"
create_cr="false"
delete_cr="false"

usage() {
    cat <<-USAGE #| fmt
    Usage: $0 [OPTIONS] [arg]

    OPTIONS:
    =======
    * -m|--mem-limit mem_limit  : The memory.limit (Gi) to be applied to Db2U deployment.
    * [-l|--label node_label]   : The node label to use for dedicated Cp4D deployments.
    * [-f|--file yaml_output]   : The NodeTuningOperator CR YAML output file. Default /tmp/Db2UnodeTuningCR.yaml.
    * [-c|--create]             : Create the NodeTuningOperator CR ${cr_name} using the generated CR yaml file.
    * [-d|--delete]             : Delete the NodeTuningOperator CR ${cr_name}.
    * [-h|--help]               : Display the help text of the script.
USAGE
}

[[ $# -lt 1 ]] && { usage && exit 1; }

while [[ $# -gt 0 ]]; do
    case "$1" in
        -f|--file) shift; tuned_cr_yaml=$1
        ;;
        -m|--mem-limit) shift; mem_limit_Gi=$1
        ;;
        -l|--label) shift; node_label=$1
        ;;
        -c|--create) create_cr="true"
        ;;
        -d|--delete) delete_cr="true"
        ;;
        -h|--help) usage && exit 0
        ;;
        *) usage && exit 1
        ;;
	esac
	shift
done

((ram_in_BYTES=mem_limit_Gi * 1073741824))
((ram_GB=ram_in_BYTES / (1024 * 1024 * 1024)))
((IPCMNI_LIMIT=32 * 1024))
tr ' ' '\n' < /proc/cmdline | grep -q ipcmni_extend && ((IPCMNI_LIMIT=8 * 1024 * 1024))

#
### =============== functions ================ ###
#
# Compute the required kernel IPC parameter values
compute_kernel_ipc_params() {
    local PAGESZ=$(getconf PAGESIZE)

    # Global vars
    ((shmmni=256 * ram_GB))
    shmmax=${ram_in_BYTES}
    ((shmall=2 * (ram_in_BYTES / PAGESZ)))
    ((msgmni=1024 * ram_GB))
    msgmax=65536
    msgmnb=${msgmax}
    SEMMSL=250
    SEMMNS=256000
    SEMOPM=32
    SEMMNI=${shmmni}

    # RH bugzilla https://access.redhat.com/solutions/4968021. Limit SEMMNI, shmmni and msgmni to the max
    # supported by the Linux kernel -- 32k (default) or 8M if kernel boot parameter 'ipcmni_extend' is set.
    ((SEMMNI=SEMMNI < IPCMNI_LIMIT ? SEMMNI : IPCMNI_LIMIT))
    ((shmmni=shmmni < IPCMNI_LIMIT ? shmmni : IPCMNI_LIMIT))
    ((msgmni=msgmni < IPCMNI_LIMIT ? msgmni : IPCMNI_LIMIT))
}

# Generate NodeTuning Operator YAML file
gen_tuned_cr_yaml() {
    # Generate YAML file for NodeTuning CR and save as ${tuned_cr_yaml}
    cat <<-EOF > ${tuned_cr_yaml}
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: ${cr_name}
  namespace: ${cr_namespace}
spec:
  profile:
  - name: ${cr_profile_name}
    data: |
      [main]
      summary=Tune IPC Kernel parameters on OpenShift nodes running Db2U engine PODs
      include=openshift-node

      [sysctl]
      kernel.shmmni = ${shmmni}
      kernel.shmmax = ${shmmax}
      kernel.shmall = ${shmall}
      kernel.sem = ${SEMMSL} ${SEMMNS} ${SEMOPM} ${SEMMNI}
      kernel.msgmni = ${msgmni}
      kernel.msgmax = ${msgmax}
      kernel.msgmnb = ${msgmnb}

  recommend:
  - match:
    - label: node-role.kubernetes.io/worker
EOF

    # Add the optional dedicated label into match array
    if [[ -n "${node_label}" ]]; then
        cat <<-EOF >> ${tuned_cr_yaml}
    - label: icp4data
      value: ${node_label}
EOF
    fi

    # Add the priority and profile keys
    cat <<-EOF >> ${tuned_cr_yaml}
    priority: 10
    profile: ${cr_profile_name}
EOF

    [[ "${create_cr}" == "true" ]] && return
    cat <<-MSG
===============================================================================
* Successfully generated the Node Tuning Operator Custom Resource Definition as
  ${tuned_cr_yaml} YAML with Db2U specific IPC sysctl settings.

* Please run 'oc create -f ${tuned_cr_yaml}' on the master node to
  create the Node Tuning Operator CR to apply those customized sysctl values.
===============================================================================
MSG
}

create_tuned_cr() {
    echo "Creating the Node Tuning Operator Custom Resource for Db2U IPC kernel parameter tuning ..."
    oc create -f ${tuned_cr_yaml}
    sleep 2

    # List the NodeTuning CR and describe
    oc -n ${cr_namespace} get Tuned/${cr_name}
    echo ""

    echo "The CR of the Node Tuning Operator deployed"
    echo "--------------------------------------------"
    oc -n ${cr_namespace} describe Tuned/${cr_name}
    echo ""
}

delete_tuned_cr() {
    echo "Deleting the Node Tuning Operator Custom Resource used for Db2U IPC kernel parameter tuning ..."
    oc -n ${cr_namespace} get Tuned/${cr_name} --no-headers -ojsonpath='{.kind}' | grep -iq tuned || \
        { echo "No matching CR found ..." && exit 0; }
    oc -n ${cr_namespace} delete Tuned/${cr_name}
    echo ""
    sleep 2

    # Get the list of containerized tuned PODs (DaemonSet) deployed on the cluster
    local tuned_pods=( $(oc -n ${cr_namespace} get po --selector openshift-app=tuned --no-headers -ojsonpath='{.items[*].metadata.name}') )
    # Remove the tuned profile directory deployed on those PODs
    for p in "${tuned_pods[@]}"; do
        echo "Removing the installed tuned profile ${cr_profile_name} on POD: $p"
        oc -n ${cr_namespace} exec -it $p -- bash -c "rm -fr /etc/tuned/${cr_profile_name}"
    done
    echo ""
}

#
### ================== Main ==================== ###
#

[[ "${delete_cr}" == "true" ]] && { delete_tuned_cr && exit 0; }

compute_kernel_ipc_params

gen_tuned_cr_yaml

[[ "${create_cr}" == "true" ]] && create_tuned_cr

What to do next

After you install Db2 Warehouse, complete Configuring Db2 Warehouse to be deployed with limited privileges.