Upgrading from v10.0.1.2-ifix1 to v10.0.2.0
Upgrade your API Connect deployment on Kubernetes from v10.0.1.2-ifix1 to v10.0.2.0.
Procedure
- Complete the prerequisites:
- Back up the current deployment. Wait until the backup completes before starting the upgrade.
- Do not start an upgrade if a backup is scheduled to run within a few hours.
- Do not perform maintenance tasks such as rotating key-certificates, restoring from a backup, or starting a new backup, at any time while the upgrade process is running.
- Important: When upgrading to v10.0.2.0, if you used any microservice image overrides in the management CR during a fresh install, the operator automatically removes those image overrides during the upgrade. You can apply them again after the upgrade is complete.
- Back up API test data in the Automated API behavior testing application as explained in Backing up API test data. After the upgrade is complete, you can migrate the data to v10.0.2.0 as explained in Restoring API test data.
- Download the v10.0.2.0 distribution files for API Connect from Fix Central. You can find links to the latest API Connect Fix Pack files here: What's New in the latest version.
You will download the Docker image-tool files for the v10.0.2.0 files of the API Connect subsystems, and also download the Kubernetes operators and API Connect Custom Resource (CR) templates. Next, you will upload the image-tool file to your Docker local registry. If necessary, you can populate a remote container registry with repositories. Then you can push the images from the local registry to the remote registry.
Table 1. Version 10.0.2.0 distribution files (Fix Central download file: description)
- apiconnect-image-tool-10.0.2.0.tar.gz: Docker images for all API Connect subsystems for Version 10.0.2.0.
- apiconnect-operator-release-files_v10.0.2.0.zip: Kubernetes operators and API Connect Custom Resource (CR) templates for Version 10.0.2.0.
- toolkit-linux_lts_v10.0.2.0.zip, toolkit-mac_lts_v10.0.2.0.zip, toolkit-windows_lts_v10.0.2.0.zip: Optional toolkit command-line utility for Version 10.0.2.0. Alternatively, you can install the toolkit from the Cloud Manager and API Manager UIs after installation is completed.
- apic-lte-images-10.0.2.0.tar.gz: Optional test environment for Version 10.0.2.0. See Testing an API with the Local Test Environment.
- signatures_v10.0.2.0.zip: IBM API Connect 10.0.2.0 Security Signature Bundle Files.
- Back up the API Connect subsystems.
The upgrade to v10.0.2.0 supports rollback if the upgrade encounters problems.
- Next, upload the v10.0.2.0 image files that you obtained from Fix Central in Step 2.
- Load the image-tool image for v10.0.2.0 into your Docker local
registry:
docker load < apiconnect-image-tool-10.0.2.0.tar.gz
Ensure that the registry has sufficient disk space for the files.
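To gauge whether the local registry has enough disk space before loading the image archive, you can check the Docker data directory. This is a minimal sketch; /var/lib/docker is only the default data root and may differ on your host.

```shell
# Sketch: check free disk space where Docker stores images before loading the tool.
# /var/lib/docker is the default data root; override DOCKER_ROOT if yours differs.
docker_root="${DOCKER_ROOT:-/var/lib/docker}"
df -h "$docker_root" 2>/dev/null || df -h /
```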
- If your Docker registry requires repositories to be created before images
can be pushed, create the repositories for each of the images listed by the image tool. If your
Docker registry does not require creation of repositories, skip this step and go to Step 4.c.
- Run the following command to get a list of the images from
image-tool:
docker run --rm apiconnect-image-tool-10.0.2.0 version --images
- Each entry in the output has the form <image-name>:<image-tag>. Use your Docker registry's repository-creation command to create a repository for each <image-name>. For example, in the case of AWS ECR, the command for each <image-name> would be:

aws ecr create-repository --repository-name <image-name>
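The two commands above can be combined into a loop that creates one repository per image name. This is a sketch for AWS ECR only (other registries have their own repository-creation commands), and it assumes the AWS CLI is already configured; the function name is illustrative.

```shell
# Sketch for AWS ECR: create one repository per image name reported by the
# image tool. The function name is illustrative; assumes a configured AWS CLI.
create_ecr_repositories() {
  docker run --rm apiconnect-image-tool-10.0.2.0 version --images |
  while IFS= read -r entry; do
    image_name="${entry%%:*}"   # strip the :<image-tag> suffix
    aws ecr create-repository --repository-name "$image_name"
  done
}
```

Call create_ecr_repositories once before uploading the images.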
- Upload the image:
- If you do not need to authenticate with the Docker registry,
use:
docker run --rm apiconnect-image-tool-10.0.2.0 upload <registry-url>
- Otherwise, if your Docker registry accepts authentication with username and password arguments,
use:
docker run --rm apiconnect-image-tool-10.0.2.0 upload <registry-url> --username <username> --password <password>
- Otherwise, such as with IBM Container Registry, if you need the image-tool to use your local
Docker credentials, first authenticate with your Docker registry, then upload images with the
command:
docker run --rm -v ~/.docker:/root/.docker --user 0 apiconnect-image-tool-10.0.2.0 upload <registry-url>
If necessary, review the following installation notes:
- Decompress apiconnect-operator-release-files_v10.0.2.0.zip.
- Apply the new CRDs from the version you just extracted:
kubectl apply -f ibm-apiconnect-crds.yaml
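After the apply completes, you can confirm the CRDs were registered. A minimal sketch, assuming the CRD names contain "apiconnect" (the grep filter and function name are illustrative):

```shell
# Sketch: confirm the new API Connect CRDs were registered after the apply.
# The grep filter and function name are illustrative.
check_apic_crds() {
  kubectl get crds | grep -i apiconnect
}
```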
- Apply the ingress-issuer:
kubectl apply -f helper_files/ingress-issuer-v1-alpha1.yaml -n <namespace>
This step is required during the upgrade because the file is updated for v10.0.2.0.
If you are upgrading a two data center disaster recovery deployment, apply the following files instead. On dc1:
kubectl apply -f helper_files/ingress-issuer-v1-alpha1-dc1.yaml -n <namespace>
On dc2:
kubectl apply -f helper_files/ingress-issuer-v1-alpha1-dc2.yaml -n <namespace>
- Apply the new DataPower Operator yaml into the namespace where the DataPower Operator is
running.
- If the operator is not running in the default namespace, open the ibm-datapower.yaml file in a text editor and find and replace all references to default with the name of your namespace. You do not need to take this action when using Operator Lifecycle Manager (OLM).
- Open ibm-datapower.yaml in a text editor. Locate the image: key in the containers section of the deployment yaml, immediately after imagePullSecrets:. Replace the value of the image: key with the location of the datapower-operator image, either uploaded to your own registry or pulled from a public registry.

kubectl apply -f ibm-datapower.yaml -n <namespace>
The Gateway CR goes to Pending state when the operator is updated. The state of the Gateway CR will change to Running after installation of the API Connect operator in the next step.
Note: There is a known issue on Kubernetes version 1.19.4 or higher that can cause the DataPower operator to fail to start. In this case, the DataPower operator pods can fail to schedule, and display the status message: no nodes match pod topology spread constraints (missing required label). For example:

0/15 nodes are available: 12 node(s) didn't match pod topology spread constraints (missing required label), 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
You can work around the issue by editing the DataPower operator deployment and re-applying it, as follows:
- Delete the DataPower operator deployment, if deployed
already:
kubectl delete -f ibm-datapower.yaml -n <namespace>
- Open ibm-datapower.yaml, and locate the topologySpreadConstraints: section. For example:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: zone
  whenUnsatisfiable: DoNotSchedule
- Replace the values for topologyKey: and whenUnsatisfiable: with the corrected values shown in the example below:

topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
- Save ibm-datapower.yaml and deploy the file to the cluster:

kubectl apply -f ibm-datapower.yaml -n <namespace>
- Apply the new API Connect operator yaml into the namespace where the API
Connect operator is running.
- If the operator is not running in the default namespace, open the ibm-apiconnect.yaml file in a text editor and find and replace all references to default with the name of your namespace. You do not need to take this action when using Operator Lifecycle Manager (OLM).
- Open ibm-apiconnect.yaml in a text editor. Replace the value of each image: key with the location of the apiconnect operator images (from the ibm-apiconnect container and the ibm-apiconnect-init container), either uploaded to your own registry or pulled from a public registry.

kubectl apply -f ibm-apiconnect.yaml -n <namespace>
When the apiconnect operator deployment is updated, it detects that existing pods for all subsystems have labels that no longer match, and tries to fix the labels. While the labels are being fixed, most of the microservices (for all subsystems) are recreated. All the subsystem CRs go into Pending state and then into Running state. Management subsystem microservices are recreated at the end of the process, with the exception of the postgres/NATS components.
- Verify that both operators are restarted.
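One way to verify is to list both operator deployments and check their ready replicas. A minimal sketch: the deployment names and the function name are assumptions; match them to your installation.

```shell
# Sketch: list both operator deployments and confirm they report ready replicas.
# The deployment names are assumptions; match them to your installation.
check_operators() {
  kubectl get deployments -n "$1" | grep -E 'ibm-apiconnect|datapower-operator'
}
```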
- Prior to upgrading the operands (subsystems), ensure that the apiconnect operator
re-created the necessary microservices as part of the label updates in step 9:
kubectl get apic -n <namespace>
- Upgrade the operands (subsystems):
- Optional: For the optional components API Connect Toolkit and API Connect Local Test Environment, install the v10.0.2.0 version of each after you complete upgrade of the subsystems to v10.0.2.0.