Upgrading API Connect subsystems in a VMware environment
Complete the following steps to upgrade API Connect subsystems.
Before you begin
- You cannot upgrade directly to FP24 from versions older than FP20; you must upgrade to FP20 first.
- Upgrading to FP20 from any version older than FP15 requires that you first upgrade to at least one version in the FP15-to-FP19 range.
- Ensure that you have met the requirements for upgrading API Connect subsystems in a VMware environment. See Requirements for upgrading on VMware.
- Ensure that you are upgrading to the latest Fix Pack version; these instructions apply only to that version. To access the Fix Packs, see the link in What's New in the latest release.
About this task
When you apply an upgrade, the new level of the subsystem overwrites the existing level. Your user configuration, APIs, Products, and subsystem configurations (Management, Analytics, and Developer Portal) are retained.
The upgrade also updates role permissions as follows:
- A role that had the Api-Drafts:View permission, but not the Api-Drafts:Manage permission, has the Product-Drafts:View permission added if not already present.
- A role that has both the Api-Drafts:View and Api-Drafts:Manage permissions has the Product-Drafts:View and Product-Drafts:Manage permissions added if not already present.
The permission settings for custom roles are not changed.
For more information on user roles, and assigning permissions to roles, see API Connect user roles and Creating custom roles.
Procedure
1. Verify that the API Connect deployment is healthy and fully operational:
   - Upgrading from v2018.4.1.6 or later: see Checking cluster health on VMware. When upgrading from v2018.4.1.6 or later to v2018.4.1.10 or later, this step is optional because a health check is run automatically as part of the apicup subsys install command in step 7. You might still want to run the health check now, as preparation for the manual backup in step 2. (A scripted health-check example follows step 2.)
   - Upgrading from v2018.4.1.5 or earlier: see Determining status of a cluster on VMware.
2. Complete a manual backup of the API Connect subsystems. See Backing up and restoring.
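For example, here is a minimal scripted health check that you could run before taking the backup. It is a sketch only: the subsystem names mgmt, analyt, and portal are assumptions, so substitute the names defined in your own project, and the apicup subsys health-check command applies to v2018.4.1.6 or later.

   # Run from the apicup project directory; replace the subsystem names with your own
   apicup subsys health-check mgmt
   apicup subsys health-check analyt
   apicup subsys health-check portal
   # Each command should complete without reporting errors before you take the backup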
3. If necessary, prepare your management database for the upgrade:
   - Upgrading from v2018.4.1.8 or later: no preparation is required. Skip this step and go directly to Step 4.
   - Upgrading from v2018.4.1.7 or earlier: due to schema changes, the upgrade of the management database takes longer than in previous upgrades. For long-established deployments with a large amount of data, such as over 10 GB, the upgrade can take as long as several hours. To reduce this time, use the apicops interface to truncate the subscriber event table and remove unused snapshots. Download the latest release of the apicops interface from https://github.com/ibm-apiconnect/apicops/releases, and run the following commands:
     apicops subscriber-queues:clear
     apicops snapshots:clean-up
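For example, here is a minimal sketch of that preparation. It assumes you run apicops somewhere that has access to the management subsystem (for example, as root on a management appliance), and that the downloaded release asset was saved as ./apicops; the actual asset name on the releases page varies by version and platform.

   # Make the downloaded apicops release executable
   chmod +x ./apicops
   # Truncate the subscriber event table and remove unused snapshots before the upgrade
   ./apicops subscriber-queues:clear
   ./apicops snapshots:clean-up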
4. Download the appropriate images from IBM® Fix Central.
   To access the Fix Packs, see the link in What's New in the latest release. On the Fix Pack page, select the version you want to install. When the version contents are displayed, access the files by clicking the Status: Available link.
   The upgrade files are distributed in compressed tar format. The filename structure is:
     upgrade_management_lts_<version>.tgz
     upgrade_analytics_lts_<version>.tgz
     upgrade_portal_lts_<version>.tgz
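As a quick, optional sanity check after downloading, you can confirm that each archive is a readable gzipped tar before transferring it anywhere. This sketch assumes the files were saved to your apicup project directory.

   # List the downloaded upgrade archives and confirm that each one is readable
   ls -lh upgrade_*_lts_*.tgz
   for f in upgrade_*_lts_*.tgz; do tar -tzf "$f" > /dev/null && echo "$f OK"; done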
5. If necessary, download from the same Fix Pack page any Control Plane files that are needed.
   Control Plane files provide support for specific Kubernetes versions. The file upgrade_management_lts_<version>.tgz contains the latest Control Plane file. An upgrade from the most recent API Connect version to the current version does not need a separate Control Plane file. However, when upgrading from older versions of API Connect, you must install one or more Control Plane files to ensure that all current Kubernetes versions are supported. Consult the following list to see whether your deployment needs one or more separate Control Plane files.
   Note: If you want to upgrade to an older fix pack than the current release, see the Control Plane lists in Control planes needed for upgrading to earlier fix packs.
   Control Plane files to download when upgrading to v2018.4.1.24, by the version you are upgrading from:
   - v2018.4.1.20 and iFixes: no Control Plane files needed.
   - v2018.4.1.19 and iFixes (v2018.4.1.18 was not released): download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
   - v2018.4.1.17 and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
   - v2018.4.1.16 and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
     appliance-control-plane-1.20.x.tgz
     appliance-control-plane-1.19.x.tgz
   - v2018.4.1.15 and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
     appliance-control-plane-1.20.x.tgz
     appliance-control-plane-1.19.x.tgz
   - v2018.4.1.13 and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
     appliance-control-plane-1.20.x.tgz
     appliance-control-plane-1.19.x.tgz
     appliance-control-plane-1.18.x.tgz
   - v2018.4.1.12, v2018.4.1.11, and v2018.4.1.10 and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
     appliance-control-plane-1.20.x.tgz
     appliance-control-plane-1.19.x.tgz
     appliance-control-plane-1.18.x.tgz
     appliance-control-plane-1.17.x.tgz
   - v2018.4.1.9, v2018.4.1.8, and v2018.4.1.7 and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
     appliance-control-plane-1.20.x.tgz
     appliance-control-plane-1.19.x.tgz
     appliance-control-plane-1.18.x.tgz
     appliance-control-plane-1.17.x.tgz
     appliance-control-plane-1.16.x.tgz
     appliance-control-plane-1.15.x.tgz
   - v2018.4.1.6 and v2018.4.1.5, and iFixes: download
     appliance-control-plane-1.23.x.tgz
     appliance-control-plane-1.22.x.tgz
     appliance-control-plane-1.21.x.tgz
     appliance-control-plane-1.20.x.tgz
     appliance-control-plane-1.19.x.tgz
     appliance-control-plane-1.18.x.tgz
     appliance-control-plane-1.17.x.tgz
     appliance-control-plane-1.16.x.tgz
     appliance-control-plane-1.15.x.tgz
     appliance-control-plane-1.14.x.tgz
- Control Plane files are distributed on the same Fix Pack page as the API Connect distribution files.
- You will install the Control Plane files with each subsystem as part of Step 7.
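For illustration only, an apicup project directory staged for an upgrade from v2018.4.1.7 (or its iFixes) to v2018.4.1.24 would contain the three subsystem archives plus the nine Control Plane files from the list above; the exact file names shown here are assumptions.

   # Example staging check before running apicup subsys install in Step 7
   ls -1 *.tgz
   # upgrade_management_lts_v2018.4.1.24.tgz
   # upgrade_analytics_lts_v2018.4.1.24.tgz
   # upgrade_portal_lts_v2018.4.1.24.tgz
   # appliance-control-plane-1.15.x.tgz through appliance-control-plane-1.23.x.tgz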
6. Download the version of Install Assist (apicup) that matches your upgrade version of API Connect.
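For example, here is a minimal sketch of installing the Install Assist binary on a Linux workstation. The file name apicup-linux is an assumption; release asset names vary, so adjust it to match your download.

   # Make the binary executable, put it on the PATH, and confirm the reported version
   chmod +x apicup-linux
   sudo mv apicup-linux /usr/local/bin/apicup
   apicup version   # should match the Fix Pack level you are installing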
7. For each API Connect subsystem, in turn, run the install command. The syntax is:
   apicup subsys install [SUBSYS-NAME] [path_to_subsystem_upgrade_tar_archive]
   (A complete example run for all three subsystems is sketched at the end of this step.)
- When you run the install command, the program sends the compressed tar file, which contains the upgrade images, to all cluster members. The compressed tar file is about 2 GB, and the transfer can take some time. When the install command exits, the compressed tar file has arrived at each member. The upgrade process is then underway, and might continue for some time depending on factors such as the size of your deployment and your network speed.
- If you downloaded Control Plane files in Step 5, install them at the same time as each subsystem. The syntax is:
apicup subsys install [SUBSYS-NAME] [path_to_subsystem_upgrade_tar_archive] [path_to_control_plane_file]
- You must install the Control Plane file and the upgrade file for the subsystem at the same time. You can install multiple Control Plane files with one apicup subsys install command. The order of the files passed to apicup subsys install does not matter.
  - For example, to upgrade the management subsystem from 2018.4.1.7 to 2018.4.1.12:
    $ apicup subsys install management upgrade_management_lts_v2018.4.1.12.tgz appliance-control-plane-1.15.[x].tgz
  - For example, to upgrade the management subsystem from 2018.4.1.5-ifix1.0 to 2018.4.1.12:
    $ apicup subsys install management upgrade_management_lts_v2018.4.1.12.tgz appliance-control-plane-1.15.[x].tgz appliance-control-plane-1.14.[x].tgz
- If you are unsure which files are required for the upgrade, run apicup subsys install anyway; it interrogates the subsystem and returns an error if any required files are missing. For example, upgrading from 2018.4.1.7 to 2018.4.1.12 but missing a Control Plane file:
    $ apicup subsys install management upgrade_management_lts_v2018.4.1.12.tgz
    Error: failed to install the subsystem: Unable to execute appliance plan: Missing minor version 15 in control plane upgrade path
  The error indicates that you must also provide a Control Plane artifact whose minor version is 15. The Control Plane artifact file naming convention is major.minor.patch; any patch version for the same minor version will work: appliance-control-plane-1.15.[x].tgz
- The apicup subsys install command automatically runs apicup health-check before attempting the upgrade. An error is displayed if a problem is found that will prevent a successful upgrade.
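Putting the step together, here is a sketch of a complete run for all three subsystems. The subsystem names mgmt, analyt, and portal and the <version> placeholder are assumptions; add the Control Plane archives to each command, as shown in the examples above, if your starting version requires them.

   # Run from the apicup project directory, one subsystem at a time
   apicup subsys install mgmt upgrade_management_lts_<version>.tgz
   apicup subsys install analyt upgrade_analytics_lts_<version>.tgz
   apicup subsys install portal upgrade_portal_lts_<version>.tgz
   # Each install runs a health check first and exits once the upgrade images have
   # been transferred to every cluster member; the upgrade then continues on the appliances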
8. For each subsystem, verify that the upgrade was successful.
   - Use ssh to access the appliance and run sudo apic status. Verify that the Upgrade stage: property has the value UPGRADE_DONE. Note that after the upgrade completes, it can take several minutes for all servers to start. If you see the error message Subsystems not running, wait a few minutes, try the command again, and review the output in the STATUS column of the Pods Summary section. For example, here is output from an upgrade of the Analytics subsystem:

     #sudo apic status
     INFO[0000] Log level: info

     Cluster members:
     - testsrv0233.subnet1.example.com (1.2.152.233)
       Type: BOOTSTRAP_MASTER
       Install stage: DONE
       Upgrade stage: UPGRADE_DONE
       Docker status:
         Systemd unit: running
       Kubernetes status:
         Systemd unit: running
         Kubelet version: testsrv0233 (4.4.0-138-generic) [Kubelet v1.10.6, Proxy v1.10.6]
       Etcd status: pod etcd-testsrv0233 in namespace kube-system has status Running
       Addons: calico, dns, helm, kube-proxy, metrics-server, nginx-ingress,
       Etcd cluster state:
       - etcd member name: testsrv0233.subnet1.example.com, member id: 12836860275847862867, cluster id: 14018872452420182423, leader id: 12836860275847862867, revision: 365042, version: 3.1.17

     Pods Summary:
     NODE          NAMESPACE     NAME                                                            READY   STATUS      REASON
                   default       apic-analytics-analytics-client-76956644b9-cmgx8                0/0     Pending
     testsrv0233   default       apic-analytics-analytics-client-76956644b9-vlqp2                1/1     Running
     testsrv0233   default       apic-analytics-analytics-cronjobs-retention-1541381400-hp9fc    0/1     Succeeded
     testsrv0233   default       apic-analytics-analytics-cronjobs-rollover-1541445300-c5n6z     0/1     Succeeded
                   default       apic-analytics-analytics-ingestion-547f875467-8mhsl             0/0     Pending
     testsrv0233   default       apic-analytics-analytics-ingestion-547f875467-s7flj             1/1     Running
                   default       apic-analytics-analytics-mtls-gw-85b8676855-jmh8c               0/0     Pending
     testsrv0233   default       apic-analytics-analytics-mtls-gw-85b8676855-sw6ps               1/1     Running
     testsrv0233   default       apic-analytics-analytics-storage-basic-8cckh                    1/1     Running
     testsrv0233   kube-system   calico-node-8crtp                                               2/2     Running
     testsrv0233   kube-system   coredns-87cb95869-6flvn                                         1/1     Running
     testsrv0233   kube-system   coredns-87cb95869-rccvb                                         1/1     Running
     testsrv0233   kube-system   etcd-testsrv0233                                                1/1     Running
     testsrv0233   kube-system   ingress-nginx-ingress-controller-f7b9z                          1/1     Running
     testsrv0233   kube-system   ingress-nginx-ingress-default-backend-6f58fb5f56-nklmv          1/1     Running
     testsrv0233   kube-system   kube-apiserver-testsrv0233                                      1/1     Running
     testsrv0233   kube-system   kube-apiserver-proxy-testsrv0233                                1/1     Running
     testsrv0233   kube-system   kube-controller-manager-testsrv0233                             1/1     Running
     testsrv0233   kube-system   kube-proxy-2vw9b                                                1/1     Running
     testsrv0233   kube-system   kube-scheduler-testsrv0233                                      1/1     Running
     testsrv0233   kube-system   metrics-server-5558db4678-9drz6                                 1/1     Running
     testsrv0233   kube-system   tiller-deploy-84f4c8bb78-vx65c                                  1/1     Running
- To check the status of a cluster on v2018.4.1.6 or later, see Checking cluster health on VMware.
   - After completing an upgrade of the Portal subsystem, there may be a delay while the existing sites are upgraded to the new platform version. Once all Portal pods have been upgraded, run the following commands from your project directory.
     - To see the progress of sites being upgraded to the new platform version, run:
       apicup subsys exec portal list-sites sites
       Any sites currently upgrading are listed as UPGRADING. Once all sites have finished upgrading, they should have the INSTALLED status and the new platform version listed.
     - Once all sites are in the INSTALLED state and have the new platform listed, run:
       apicup subsys exec portal list-sites platforms
       The new version of the platform should be the only platform listed.
     Important: DO NOT reboot any of the virtual machines until ALL of the Portal sites are in the INSTALLED state and there is only one platform returned, even if you are instructed to do so by the apicup subsys health-check command during the upgrade process.
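If you prefer not to retype the command while sites are upgrading, you can poll it; this is a convenience sketch only, using the same command shown above.

   # Re-check the Portal site status every 60 seconds until no site shows UPGRADING
   watch -n 60 'apicup subsys exec portal list-sites sites'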
9. When the upgrade completes successfully, use ssh to access the appliance. If a message indicates that a reboot is necessary, reboot the virtual machine to complete the operating system upgrades.
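If you want to confirm whether a reboot is pending before you reboot, the following sketch assumes an Ubuntu-based appliance image where a pending reboot is flagged by the /var/run/reboot-required file; verify the mechanism on your own appliance before relying on it.

   # On the appliance, reboot only if the operating system reports a pending reboot
   if [ -f /var/run/reboot-required ]; then
     sudo reboot
   fi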
10. Version 2018.4.1.9 iFix1.0 and later: After completion of the upgrade, verify that all tasks are running.
    Due to a known limitation in versions prior to v2018.4.1.9 iFix1.0, some tasks may have stopped running. Complete the following steps:
    - Download the apicops utility from https://github.com/ibm-apiconnect/apicops/releases.
    - Run the following command to remove any pending tasks:
      $ apicops task-queue:fix-stuck-tasks
    - Run the following command to verify that the returned list (task queue) is empty:
      $ apicops task-queue:list-stuck-tasks
11. If you upgraded the Analytics subsystem from Version 2018.4.1.9 (or earlier) to Version 2018.4.1.10 (or later), and you enabled the Analytics message queue, you must clean and restart the Analytics message queue and ingestion pods.
    With root permissions, run the following script on a single Analytics OVA after the upgrade:
    #sudo su
    #!/bin/bash
    kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep zookeeper | while read POD ; do
      echo "-------------Deleting zookeeper data for $POD"
      kubectl exec $POD -- rm -rf /var/lib/zookeeper/data
      kubectl exec $POD -- rm -rf /var/lib/zookeeper/log
      kubectl exec $POD -- rm /var/lib/zookeeper/zookeeper.entries
      kubectl exec $POD -- rm /var/lib/zookeeper/nodes.json
    done
    kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep zookeeper | while read POD ; do
      echo "-------------Deleting pod"
      kubectl delete pod $POD --grace-period 0 --force &
    done
    sleep 120
    kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep kafka | while read POD ; do
      echo "-------------Deleting pod"
      kubectl delete pod $POD --grace-period 0 --force &
    done
    sleep 120
    kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep ingestion | while read POD ; do
      echo "-------------Deleting pod"
      kubectl delete pod $POD --grace-period 0 --force &
    done
When the script completes, verify that the zookeeper pods are running:
kubectl get pods
If any of the zookeeper pods are not running, run the script again.
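To focus the check on just the pods that the script touches, you can filter the pod list; this is a small convenience sketch.

   # Show only the message queue and ingestion pods; the zookeeper pods should all be Running
   kubectl get pods | grep -E 'zookeeper|kafka|ingestion'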
What to do next
If you encounter problems with the cassandra or calico pods after completing the API Connect upgrade, see Troubleshooting the API Connect upgrade on VMware for suggested resolutions.
When you have successfully upgraded all of the API Connect subsystems, upgrade your DataPower Gateway Service. See Upgrading DataPower Gateway Service.