
Upgrade Docker version on the Information Server Microservices tier

How To


Summary

When upgrading the Microservices tier to version 11.7.1.1 or newer, the Docker component is not upgraded automatically because of the complexity of the steps involved. This document describes the post-upgrade steps required to bring Docker up to date.

Objective

Expected Docker versions for each release are:
  • 19.03.5 for Information Server 11.7.1.1 and 11.7.1.2
  • 20.10.2 for Information Server 11.7.1.1 SP1
You can verify the current Docker version using the following command on each of the Microservices tier nodes:
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
If the reported version is older than expected, proceed with the steps described in this document.

Steps

Before starting Docker upgrade

The Docker upgrade procedure must be performed on one node at a time. You therefore need the inventory aliases of the nodes that are part of the Microservices tier setup.
After logging in to the primary Microservices tier host, change your working directory to the Microservices tier installation directory and then discover the names of each Ansible host:
$ cd /opt/IBM/UGinstall/ugdockerfiles/
$ ansible -o -m setup -a 'filter=ansible_nodename' all
deployment_coordinator | SUCCESS => {"ansible_facts": {"ansible_nodename": "mltest-rhel-1.novalocal", ...
master-1 | SUCCESS => {"ansible_facts": {"ansible_nodename": "mltest-rhel-1.novalocal", ...
worker-1 | SUCCESS => {"ansible_facts": {"ansible_nodename": "mltest-rhel-2.novalocal", ...
worker-2 | SUCCESS => {"ansible_facts": {"ansible_nodename": "mltest-rhel-3.novalocal", ...
In the example listing above, inventory aliases are the first word in each line, and "ansible_nodename" displays the real hostname of each Microservices tier host.
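To check the currently installed Docker version on all nodes at once, the same inventory can be reused with an Ansible ad-hoc command. This is only an illustrative shortcut, not part of the official procedure; it assumes the standard Ansible "command" module, and you may need to add --become if running docker requires root on your hosts:
$ ansible -o -m command -a 'docker --version' all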

Upgrading package dependencies

The Docker package depends on a number of system packages. Ensure that the following packages are installed and upgraded to the most recent versions available for the desired Docker version:
  • Docker 19.03.5 (11.7.1.1, 11.7.1.2) - bash.x86_64, container-selinux.noarch, device-mapper-libs.x86_64, glibc.x86_64, iptables.x86_64, libcgroup.x86_64, libseccomp.x86_64, systemd-libs.x86_64, systemd.x86_64, tar.x86_64, xz.x86_64
  • Docker 20.10.2 (11.7.1.1 SP1)  - bash.x86_64, container-selinux.noarch, device-mapper-libs.x86_64, fuse-overlayfs.x86_64, glibc.x86_64, iptables.x86_64, libcgroup.x86_64, libseccomp.x86_64, slirp4netns.x86_64, shadow-utils.x86_64, systemd.x86_64, systemd-libs.x86_64, tar.x86_64, xz.x86_64
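As an illustration, on a yum-based system the dependency packages can typically be installed or brought up to date with a single command. The package list shown below is the one for Docker 19.03.5; adjust it to match your target Docker version and verify the result afterwards with "yum list installed":
$ sudo yum install -y bash container-selinux device-mapper-libs glibc iptables \
    libcgroup libseccomp systemd systemd-libs tar xz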

For Information Server 11.7.1.1 SP1 and later (except 11.7.1.2)
Starting with Information Server 11.7.1.1 SP1, a dedicated Ansible playbook is available to perform an automated upgrade of the Docker service.
To start the upgrade, change your working directory to the Microservices tier installation directory and then run the upgrade_docker.yaml playbook:
$ cd /opt/IBM/UGinstall/ugdockerfiles/
$ ./run_playbook.sh playbooks/upgrade_docker.yaml --limit master-1
Repeat running the playbook for each of the node inventory aliases discovered earlier.
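For example, with the inventory discovered earlier, the remaining nodes would be upgraded one at a time with commands similar to the following (your aliases may differ; note that in the sample inventory above, deployment_coordinator and master-1 report the same hostname):
$ ./run_playbook.sh playbooks/upgrade_docker.yaml --limit worker-1
$ ./run_playbook.sh playbooks/upgrade_docker.yaml --limit worker-2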


For Information Server 11.7.1.1 and 11.7.1.2

Upgrading Docker on these releases requires working simultaneously on both the Microservices tier master host and the host on which the Docker service is to be upgraded. It is best to open two terminal sessions so that you can quickly switch between the host shells. For single-node installations, a single session is enough.
For each node that requires the Docker upgrade, perform the following steps:
  1. On the master node, drain the chosen node by providing the discovered node name to the "kubectl drain" command. This stops all the application containers running on that node:
    $ kubectl drain mltest-rhel-2.novalocal --delete-local-data --ignore-daemonsets
    node/mltest-rhel-2.novalocal cordoned
    evicting pod "kafka-0"
    evicting pod "audit-trail-service-5c6bc7c677-lg8zk"
    evicting pod "cassandra-0"
    ...
    ...
    pod/ingress-nginx-controller-794bb7f58b-sppxz evicted
    pod/omag-6db4894cb-vffrz evicted
    pod/shop4info-event-consumer-88d76474b-v2mkr evicted
    node/mltest-rhel-2.novalocal evicted
  2. On the node being upgraded, stop the kubelet service. Note: sudo can be omitted if you are logged in as root:
    $ sudo systemctl stop kubelet.service
  3. On the node being upgraded, uninstall the current Docker version:
    $ sudo yum remove docker-ce
  4. On the master node, run the Docker setup playbook, limiting it to the Ansible alias of the node being upgraded. When prompted, verify that the list of hosts is as expected (it should specify only a single node) and confirm with "yes". The playbook then continues and should report no failed tasks at the end:
    $ ./run_playbook.sh playbooks/platform/kubernetes/setup_docker.yaml --limit worker-1
    [INFO]  Console log output file: /opt/IBM/UGinstall/ugdockerfiles/logs/setup_docker_2020_05_19_13_50_32.log
    [INFO]  Checking for Ansible...
    [INFO]  Ansible version is 2.9.5
    [PASS]  Ansible version is supported
    [INFO]  Ansible uses python2
    [INFO]  Checking for required python2 libraries...
        OK: Module netaddr was found
        OK: Module dns was found
    [PASS]  Required python2 libraries are installed
    
    
    [INFO]  Checking hosts connectivity...
    master-1 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
    deployment_coordinator | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
    worker-1 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
    worker-2 | SUCCESS => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "ping": "dest=localhost"}
    
    
    Do you want to proceed? (yes/no): yes
    
    
    ...
    ...
    
    
    RUNNING HANDLER [com/ibm/ugi/kubeplatform/docker : Restart Docker service] ****************************************************************************************************************************************
    Tuesday 19 May 2020  15:52:13 +0200 (0:00:00.411)       0:01:25.395 *********** 
    changed: [worker-1]
    
    
    PLAY RECAP ********************************************************************************************************************************************************************************************************
    worker-1                   : ok=10   changed=4    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
    
    
    ...
    ...
  5. On the node being upgraded, verify that the Docker version is now correct:
    $ sudo docker --version
    Docker version 19.03.5, build 633a0ea
  6. When upgrading from releases earlier than 11.7.1.1, reconfigure the kubelet service on the node being upgraded to use the systemd driver for cgroups. To do this, edit the file /var/lib/kubelet/kubeadm-flags.env and remove the --cgroup-driver=cgroupfs option. Then, edit the file /var/lib/kubelet/config.yaml and set the "cgroupDriver" property to "systemd". An example of these edits with sed is shown after the step list.
  7. When the node being upgraded is the master node, start the container image registry:
    $ sudo docker start registry
    registry
  8. Start the kubelet service on the node being upgraded:
    $ sudo systemctl start kubelet.service
  9. Back on the master node, wait for the node being upgraded to reach the "Ready" status by checking the output of the "kubectl get nodes" command:
    $ kubectl get nodes
    NAME                      STATUS                      ROLES    AGE   VERSION
    mltest-rhel-1.novalocal   Ready                       master   42d   v1.17.4
    mltest-rhel-2.novalocal   Ready,SchedulingDisabled    <none>   42d   v1.17.4
    mltest-rhel-3.novalocal   Ready                       <none>   42d   v1.17.4
  10. Finally, uncordon the node being upgraded:
    $ kubectl uncordon mltest-rhel-2.novalocal
    node/mltest-rhel-2.novalocal uncordoned
Repeat the steps for all the remaining nodes to be upgraded.
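Example for step 6: the kubelet configuration changes can also be applied with sed instead of a text editor. This is only a sketch; it assumes the default file locations named in step 6, so review both files afterwards to confirm the changes:
# Remove the cgroupfs cgroup driver option from the kubelet bootstrap flags
$ sudo sed -i 's/ *--cgroup-driver=cgroupfs//' /var/lib/kubelet/kubeadm-flags.env
# Switch the kubelet configuration to the systemd cgroup driver
$ sudo sed -i 's/^cgroupDriver:.*/cgroupDriver: systemd/' /var/lib/kubelet/config.yaml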

Document Location

Worldwide


Document Information

Modified date:
19 April 2021

UID

ibm16217329