Upgrading from IBM® Cloud Private-CE Version 2.1.0.3 to 3.1.0

You can upgrade IBM Cloud Private-CE from version 2.1.0.3 to 3.1.0.

You can upgrade only from version 2.1.0.3. If you use an earlier version of IBM Cloud Private-CE, you must first upgrade to version 2.1.0.3. See Upgrading and reverting in the IBM Cloud Private Version 2.1.0.3 documentation.

You can upgrade only from one version of IBM Cloud Private-CE to another version of IBM Cloud Private-CE. You cannot upgrade IBM Cloud Private-CE to the IBM Cloud Private Cloud Native or Enterprise editions.

During the upgrade process, you cannot access the IBM Cloud Private management console. You also cannot set cloud provider options, such as configuring a vSphere Cloud Provider, or choose to use NSX-T.

  1. Log in to the boot node as a user with root permissions. The boot node is usually your master node. For more information about node types, see Architecture. During installation, you specify the IP addresses for each node type.
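
    For example, you can connect to the boot node with SSH. The address in this command is a placeholder for your own boot node IP address:

    ssh root@<boot_node_ip>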
  2. Pull the IBM Cloud Private-CE installer image from Docker Hub.

    sudo docker pull ibmcom/icp-inception:3.1.0
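
    To confirm that the installer image is available locally, you can list it with Docker. This check is optional, and the output format depends on your Docker version:

    sudo docker images ibmcom/icp-inception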
    
  3. Create an installation directory and copy the cluster directory from the previous installation directory to the new IBM Cloud Private installation directory. Use a different installation directory than you used for the previous version. For example, to store the configuration files in /opt/ibm-cloud-private-3.1.0, run the following commands:

    mkdir -p /opt/ibm-cloud-private-3.1.0
    cd /opt/ibm-cloud-private-3.1.0
    cp -r /<installation_directory>/cluster .
    

    Note: /<installation_directory> is the full path to your version 2.1.0.3 installation directory, and /<new_installation_directory> is the full path to your version 3.1.0 installation directory.
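
    After the copy, you can confirm that the new cluster directory contains your existing configuration files, including the config.yaml file that you update in the next step:

    ls /<new_installation_directory>/cluster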

  4. Manually update the /<new_installation_directory>/cluster/config.yaml file.

    • In the 3.1.0 release, the disabled_management_services parameter is converted into a dictionary parameter, management_services, to allow fine-grained control of the management services. If you have disabled_management_services in your cluster/config.yaml file, update the entries to the new format and then delete the old disabled_management_services parameter from cluster/config.yaml. For example:

      Old format:

      disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
      

      New format:

      management_services:
       istio: disabled
       vulnerability-advisor: disabled
       custom-metrics-adapter: disabled
      

      Note: The vulnerability-advisor and istio parameters are disabled by default in the 3.1.0 release. If you enabled vulnerability-advisor and istio in the 2.1.0.3 release, then you need to explicitly enable them in the following format:

      management_services:
       vulnerability-advisor: enabled
       istio: enabled
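
      To find any remaining occurrences of the old parameter in your configuration file, you can run a simple search such as the following:

      grep -n "disabled_management_services" /<new_installation_directory>/cluster/config.yaml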
      
    • In the 3.1.0 release, upgrade is not supported for the audit-logging and storage-glusterfs charts. You must disable both charts in management_services:

      management_services:
       audit-logging: disabled
       storage-glusterfs: disabled
      

      The audit logging chart can be installed after capacity planning for auditing is complete. It also requires that logging is deployed to the kube-system namespace with security enabled. In 2.1.0.3, logging is deployed to the kube-system namespace without security enabled, so to deploy the audit-logging chart, logging must be uninstalled and then reinstalled from the 3.1.0 release.

      The following commands can be used to uninstall and install logging:

      helm delete --purge logging --tls
      helm install stable/ibm-icplogging --name logging --namespace kube-system --tls
      

      The following command can be used to install audit-logging:

      helm install audit-logging --name audit-logging --namespace kube-system --tls
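
      After logging is reinstalled, you can confirm that the logging pods reach the Running state. The exact pod names depend on the chart release, so a broad filter is usually enough:

      kubectl get pods -n kube-system | grep logging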

    • For IBM Cloud Private version 3.1.0, the management services that are disabled by default have changed. For the new default list, see General settings. You can add any additional services that you want to disable to this new default list.

    • For high availability clusters, the vip_manager option is etcd by default in 3.1.0. If you change it to either keepalived or ucarp, the cluster experiences a brief outage of several seconds while the new virtual IP manager takes over assignment of the address.

    • The 3.1.0 release uses a new format for the settings that configure the Docker runtime. If you customized the Docker runtime configuration options in 2.1.0.3, migrate your settings to the new format. For example:

      # Docker configuration options. For more options, see
      # https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
      docker_config:
       log-opts:
         max-size: "100m"
         max-file: "10"
      
      # Docker environment setup
      docker_env:
       - HTTP_PROXY=http://1.2.3.4:3128
       - HTTPS_PROXY=http://1.2.3.4:3128
       - NO_PROXY=localhost,127.0.0.1,{{ cluster_CA_domain }}
      
      # Install/upgrade docker version
      docker_version: 18.03.1
      
      # Install Docker automatically or not
      install_docker: true
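
      After the upgrade, you can spot-check on a node that the Docker options and version were applied. The daemon.json path in this example is the default Docker location and might differ in your environment:

      cat /etc/docker/daemon.json
      sudo docker version --format '{{.Server.Version}}'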
      
  5. Deploy your environment by completing the following steps:

    1. Change to the cluster folder in your installation directory.

      cd /<new_installation_directory>/cluster
      
    2. Prepare the cluster for upgrade.

      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:3.1.0 upgrade-prepare
      

      If the cluster preparation fails, review the error message and resolve any issues. Then, remove the cluster/.install.lock file, and run the upgrade-prepare command again.
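
      For example, to remove the lock file before you run the command again:

      sudo rm -f /<new_installation_directory>/cluster/.install.lock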

    3. Upgrade Kubernetes.

      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:3.1.0 upgrade-k8s
      
      • If the Kubernetes upgrade fails, review the error message and resolve any issues. Then, roll back the Kubernetes services and run the upgrade-k8s command again.
    4. Upgrade the charts.

      sudo docker run -e LICENSE=accept --net=host --rm -t -v "$(pwd)":/installer/cluster \
      ibmcom/icp-inception:3.1.0 upgrade-chart
      
      • If the chart upgrade fails, review the error message and resolve any issues. Then, run the upgrade-chart command again.
  6. Verify the status of your upgrade.

    • If the upgrade succeeded, the access information for your cluster is displayed.

      In the URL https://master_ip:8443, master_ip is the IP address of the master node for your IBM Cloud Private cluster.

      Note: If you created your cluster within a private network, use the public IP address of the master node to access the cluster.
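
      You can also verify the cluster from the boot node with kubectl, assuming that you configured kubectl access to the cluster. All nodes should report a Ready status, and the pods in the kube-system namespace should be Running:

      kubectl get nodes
      kubectl get pods -n kube-system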

    • If you encounter errors, see Troubleshooting.

  7. Clear your browser cache.

  8. If you have either applications that use GPU resources or a resource quota for GPU resources, you need to manually update the application or resource quota with the new GPU resource name nvidia.com/gpu.

    • For applications that use GPU resources, follow the steps in Creating a deployment with attached GPU resources to run a sample GPU application. For your own GPU application, you need to update the application to use the new GPU resource name nvidia.com/gpu. For example, to update the deployment properties, you can use either the management console (see Modifying a deployment) or the kubectl CLI.
    • To update the resource quota for GPU resources, follow the steps in Setting resource quota to set a resource quota for your namespace. For upgrading, you need to update the resource quota to use the GPU resource name nvidia.com/gpu. For example, you can set the GPU quota to requests.nvidia.com/gpu: "2".
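
      For example, you can review the resource names that a workload currently requests, and then edit the deployment or the resource quota to use nvidia.com/gpu. The deployment, quota, and namespace names in these commands are placeholders:

      kubectl get deployment <deployment_name> -n <namespace> -o yaml | grep -A 3 "resources:"
      kubectl edit deployment <deployment_name> -n <namespace>
      kubectl edit resourcequota <quota_name> -n <namespace>
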
  9. Access your cluster. From a web browser, browse to the URL for your cluster. For a list of supported browsers, see Supported browsers.

  10. Ensure that all the IBM Cloud Private default ports are open. For more information about the default IBM Cloud Private ports, see Default ports.
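
    For example, one quick check is whether the management console port, 8443, is listening on the master node. Firewall rules between nodes must also allow the documented default ports:

    sudo ss -tlnp | grep 8443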

  11. Back up the boot node. Copy your /<new_installation_directory>/cluster directory to a secure location.
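
    For example, you can create a compressed archive of the cluster directory. The backup path in this command is only an illustration; choose a location that fits your environment:

    sudo tar -czf /backup/icp-cluster-3.1.0.tar.gz /<new_installation_directory>/cluster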