IBM Software Hub online backup and restore to a different cluster with IBM Fusion
A Red Hat® OpenShift® Container Platform cluster administrator can create an online backup and restore it to a different cluster with IBM Fusion.
Before you begin
Do the following tasks before you back up and restore an IBM Software Hub deployment.
- Check whether the services that you are using support platform backup and restore by reviewing Services that support backup and restore. You can also run the following command:

  ```shell
  cpd-cli oadp service-registry check \
  --tenant-operator-namespace ${PROJECT_CPD_INST_OPERATORS} \
  --verbose \
  --log-level debug
  ```

  If a service is not supported, check whether one of the following alternatives is available:
- A service might have its own backup and restore process. In such cases, see the links that are provided in the Notes column in Services that support backup and restore.
- You might be able to migrate service data from one IBM Software Hub installation to another with the data export and import utility. Review Services that support cpd-cli export-import.
- On the source cluster, install the software that is needed to back up and restore IBM Software Hub with IBM Fusion.
For more information, see Installing backup and restore software.
- Check that your IBM Software Hub deployment meets the following requirements:
- The minimum deployment profile of IBM Cloud Pak foundational services is Small. For more information about sizing IBM Cloud Pak foundational services, see Hardware requirements and recommendations for foundational services.
- All services are installed at the same IBM Software Hub release.
You cannot back up and restore a deployment that is running service versions from different IBM Software Hub releases.
- The control plane is installed in a single project (namespace).
- The IBM Software Hub instance is installed in zero or more tethered projects.
- IBM Software Hub operators and the IBM Software Hub instance are in a good state.
Overview
Backing up an IBM Software Hub deployment and restoring it to a different cluster involves the following high-level steps:
- Preparing to back up IBM Software Hub
- Creating an online backup
- Preparing to restore IBM Software Hub
- Restoring IBM Software Hub
- Completing post-restore tasks
1. Preparing to back up IBM Software Hub
Complete the following prerequisite tasks before you create an online backup. Some tasks are service-specific, and need to be done only when those services are installed.
1.1 Creating environment variables
Create the following environment variables so that you can copy commands from the documentation and run them without making any changes.
| Environment variable | Description |
|---|---|
| `OC_LOGIN` | Shortcut for the `oc login` command. |
| `CPDM_OC_LOGIN` | Shortcut for the `cpd-cli manage login-to-ocp` command. |
| `PROJECT_CPD_INST_OPERATORS` | The project where the IBM Software Hub instance operators are installed. |
| `PROJECT_CPD_INST_OPERANDS` | The project where the IBM Software Hub control plane and services are installed. |
| `PROJECT_SCHEDULING_SERVICE` | The project where the scheduling service is installed. This environment variable is needed only when the scheduling service is installed. |
| `PROJECT_FUSION` | The project where IBM Fusion is installed. Tip: The default project is `ibm-spectrum-fusion-ns`. |
| `OADP_PROJECT` | The project where the OADP operator is installed. Note: For backup and restore with IBM Fusion, the project where the OADP operator is installed is `ibm-backup-restore`. |
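These variables can be exported in your shell session before you run the commands in this procedure. The following is a minimal sketch with hypothetical project names (replace them with the values for your deployment); only the IBM Fusion and OADP project values come from the table above. `OC_LOGIN` and `CPDM_OC_LOGIN` are typically defined as shell aliases for the login commands.

```shell
# Hypothetical project names; replace with the values for your deployment.
export PROJECT_CPD_INST_OPERATORS=cpd-operators    # IBM Software Hub instance operators
export PROJECT_CPD_INST_OPERANDS=cpd-instance      # control plane and services
export PROJECT_SCHEDULING_SERVICE=cpd-scheduler    # only if the scheduling service is installed

# Values from the table above.
export PROJECT_FUSION=ibm-spectrum-fusion-ns       # default IBM Fusion project
export OADP_PROJECT=ibm-backup-restore             # project for backup and restore with IBM Fusion

echo "Operators: ${PROJECT_CPD_INST_OPERATORS}, Operands: ${PROJECT_CPD_INST_OPERANDS}"
```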
1.2 Preparing IBM Fusion
Prepare IBM Fusion by setting up one of the clusters as the IBM Fusion backup and restore hub.
- In IBM Fusion, open the Services page and click the Backup & Restore tile.
- In the Install service window, select the storage
class (RWO) that you want to use to deploy the service.
The ibm-backup-restore project (namespace) is created on the cluster, and the service is installed in that project.
- Verify that the hub is in a healthy state by checking the Service status column of the hub.
1.3 Checking the contents of the IBM Fusion application for the IBM Software Hub operator
Check that the IBM Fusion application custom resource for the IBM Software Hub operator includes the following information:
- All projects (namespaces) that are members of the IBM Software Hub instance, including:
  - The IBM Software Hub operators project (`${PROJECT_CPD_INST_OPERATORS}`).
  - The IBM Software Hub operands project (`${PROJECT_CPD_INST_OPERANDS}`).
  - All tethered projects, if they exist.
- The `PARENT_NAMESPACE` variable, which is set to `${PROJECT_CPD_INST_OPERATORS}`.
Do the following steps:
- Log in to Red Hat OpenShift Container Platform as a cluster administrator.

  ```shell
  ${OC_LOGIN}
  ```

  Remember: `OC_LOGIN` is an alias for the `oc login` command.
- To get the list of all projects that are members of the IBM Software Hub instance, run the following command:

  ```shell
  oc get -n ${PROJECT_FUSION} applications.application.isf.ibm.com ${PROJECT_CPD_INST_OPERATORS} -o jsonpath={'.spec.includedNamespaces'}
  ```
- To get the `PARENT_NAMESPACE` variable, run the following command:

  ```shell
  oc get -n ${PROJECT_FUSION} applications.application.isf.ibm.com ${PROJECT_CPD_INST_OPERATORS} -o jsonpath={'.spec.variables'}
  ```
1.4 Checking the version of cpdbr resources
You must install the correct version of cpdbr resources for the IBM Software Hub version that you are using. For example, if you upgraded IBM Software Hub from version 4.8.4 to 5.2.0, you must also upgrade the cpdbr service to version 5.2.0.
- Log in to Red Hat OpenShift Container Platform as a cluster administrator.

  ```shell
  ${OC_LOGIN}
  ```

  Remember: `OC_LOGIN` is an alias for the `oc login` command.
- Check the version of cpdbr-oadp by running the following command:

  ```shell
  oc get po -l component=cpdbr-tenant,icpdsupport/app=br-service -n ${PROJECT_CPD_INST_OPERATORS} -o jsonpath='{.items[0].spec.containers[0].image}'
  ```

  Example output:

  ```
  icr.io/cpopen/cpd/cpdbr-oadp:5.2.2
  ```
- Check the version of the IBM Fusion backup and restore recipe for IBM Software Hub by running the following command:

  ```shell
  oc get -n ${PROJECT_CPD_INST_OPERATORS} frcpe ibmcpd-tenant -o jsonpath={'.metadata.labels.icpdsupport/version'}
  ```
- Ensure that the values returned by the preceding commands match.
1.5 Expanding PVCs that are smaller than 5Gi when using IBM Storage Scale Container Native storage
If your IBM Software Hub deployment is using IBM Storage Scale Container Native or IBM Fusion Global Data Platform storage, expand Persistent Volume Claims (PVCs) that are smaller than 5Gi to at least that amount to ensure that restoring a backup is successful. For details on expanding PVCs, see Volume Expansion in the IBM Storage Scale Container Storage Interface Driver documentation.
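On a live cluster, the undersized PVCs can be listed with `oc get pvc` plus a small filter. The following is a self-contained sketch of that filter, run here against sample output with hypothetical PVC names; with a cluster, pipe `oc get pvc --no-headers` output (name and capacity columns) into the same awk program.

```shell
# Sample PVC listing (NAME CAPACITY); on a cluster, generate it with, e.g.:
#   oc get pvc -n ${PROJECT_CPD_INST_OPERANDS} --no-headers | awk '{print $1, $4}'
cat <<'EOF' > /tmp/pvc-sample.txt
data-pvc-1 2Gi
data-pvc-2 10Gi
meta-pvc-3 512Mi
EOF

# Print PVCs whose capacity is below 5Gi (handles Gi and Mi units).
small=$(awk '{
  cap = $2
  if (cap ~ /Mi$/)      { sub(/Mi$/, "", cap); gib = cap / 1024 }
  else if (cap ~ /Gi$/) { sub(/Gi$/, "", cap); gib = cap + 0 }
  else next
  if (gib < 5) print $1
}' /tmp/pvc-sample.txt)
echo "PVCs to expand: ${small}"
```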
1.6 Checking that the primary instance of every PostgreSQL cluster is in sync with its replicas
The replicas for Cloud Native PostgreSQL and EDB Postgres clusters occasionally get out of sync with the primary node. To check whether this problem exists and to fix the problem, see the troubleshooting topic PostgreSQL cluster replicas get out of sync.
1.7 Removing MongoDB-related ConfigMaps
Delete the following ConfigMaps:

```shell
oc delete cm zen-cs-aux-br-cm
oc delete cm zen-cs-aux-ckpt-cm
oc delete cm zen-cs-aux-qu-cm
oc delete cm zen-cs2-aux-ckpt-cm
```

1.8 Preparing IBM Knowledge Catalog
If large metadata enrichment jobs are running when an online backup operation is triggered, the Db2 pre-backup hooks might fail because the database cannot be put into a write-suspended state. Schedule online backups for times when the metadata enrichment workload is minimal.
1.9 Checking the status of installed services
Ensure that the status of all installed services is Completed. Do the following steps:
- Log the `cpd-cli` in to the Red Hat OpenShift Container Platform cluster:

  ```shell
  ${CPDM_OC_LOGIN}
  ```

  Remember: `CPDM_OC_LOGIN` is an alias for the `cpd-cli manage login-to-ocp` command.
- Run the following command to get the status of all services:

  ```shell
  cpd-cli manage get-cr-status \
  --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
  ```
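Every custom resource should report Completed before you take the backup. The following sketch flags anything else, run here against sample output; the component names and column layout are illustrative only, since the real `get-cr-status` table can differ between releases.

```shell
# Sample get-cr-status output (COMPONENT CR NAMESPACE STATUS); illustrative only.
cat <<'EOF' > /tmp/cr-status.txt
zen          ZenService/lite-cr   cpd-instance   Completed
ccs          CCS/ccs-cr           cpd-instance   InProgress
datarefinery DataRefinery/dr-cr   cpd-instance   Completed
EOF

# Flag components whose last column is not Completed.
not_ready=$(awk '$NF != "Completed" {print $1}' /tmp/cr-status.txt)
if [ -n "${not_ready}" ]; then
  echo "Wait before backing up; not ready: ${not_ready}"
fi
```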
2. Creating an online backup
Create and schedule online backups of an IBM Software Hub deployment with IBM Fusion by doing the following steps.
Check the Known issues and limitations for IBM Software Hub page for any workarounds that you might need to do before you create a backup.
- Create a backup object storage location for the backups.
- In IBM Fusion, go to and click Add location.
- Add details for the location and click Add.
- Create a backup policy for the IBM Software Hub applications, and tie it to the backup storage location.
- Go to and click Add policy.
- Add details for the policy.
- Under Frequency, schedule when to run the backup policy.
- Select Object storage. Important: You cannot use in-place snapshots in the backup policy when you back up IBM Software Hub.
- Click Create policy.
- If you are creating one policy per application, repeat these steps for each project (namespace) with a backup and restore recipe. Recommendation: Create a single backup policy for all IBM Software Hub applications.
- Verify that the `ibmcpd-tenant` IBM Fusion recipe version is the same as the IBM Software Hub version:

  ```shell
  oc get frcpe -n ${PROJECT_CPD_INST_OPERATORS} -l icpdsupport/generated-by-cpdbr=true,icpdsupport/version=$VERSION
  ```

  If no `ibmcpd-tenant` recipe matches the command, ensure that it is configured by following the steps in 3.4 Installing the cpdbr service.
- Create backup policy assignments. Note: If your IBM Software Hub deployment has tethered projects (namespaces), do not create a backup policy or assign a backup policy to those projects. The tethered projects are handled with the primary control plane project.
- Go to and click Protect apps.
- From the cluster menu, select the source cluster.
- In the Protect applications window, find and select the IBM Software Hub instance (tenant) `${PROJECT_CPD_INST_OPERATORS}` application from the list and click Next.
- Select a policy to assign to the application, and then set the Back up now toggle to off. Notes:
  - Do not assign a policy to the `${PROJECT_CPD_INST_OPERANDS}` application. This application is backed up with the recipe, described in the following step, that is used to back up the `${PROJECT_CPD_INST_OPERATORS}` application.
  - Backup and restore recipe details are not yet associated with the policy assignment, so any backups that are taken now will be invalid.
- If the IBM Software Hub scheduling service is installed, repeat these steps to assign the policy to the `${PROJECT_SCHEDULING_SERVICE}` application.
- Patch policy assignments with the backup and restore recipe details.
  - Log in to Red Hat OpenShift Container Platform as an instance administrator.

    ```shell
    ${OC_LOGIN}
    ```

    Remember: `OC_LOGIN` is an alias for the `oc login` command.
  - Get each policy assignment name:

    ```shell
    oc get policyassignment -n ${PROJECT_FUSION}
    ```
  - If installed, patch the `${PROJECT_SCHEDULING_SERVICE}` policy assignment:

    ```shell
    oc -n ${PROJECT_FUSION} patch policyassignment <cpd-scheduler-policy-assignment> --type merge -p '{"spec":{"recipe":{"name":"ibmcpd-scheduler", "namespace":"'${PROJECT_SCHEDULING_SERVICE}'", "apiVersion":"spp-data-protection.isf.ibm.com/v1alpha1"}}}'
    ```
  - Patch the IBM Software Hub tenant policy assignment:

    ```shell
    oc -n ${PROJECT_FUSION} patch policyassignment <cpd-tenant-policy-assignment> --type merge -p '{"spec":{"recipe":{"name":"ibmcpd-tenant", "namespace":"'${PROJECT_CPD_INST_OPERATORS}'", "apiVersion":"spp-data-protection.isf.ibm.com/v1alpha1"}}}'
    ```
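The patch commands splice shell variables into a single-quoted JSON string, which is easy to get wrong. As a sketch (with a hypothetical project name), the same merge-patch body can be built with printf and validated before it is passed to `oc patch ... -p`:

```shell
PROJECT_CPD_INST_OPERATORS=cpd-operators   # hypothetical value

# Same merge-patch body that the tenant policy assignment command sends.
patch=$(printf '{"spec":{"recipe":{"name":"ibmcpd-tenant","namespace":"%s","apiVersion":"spp-data-protection.isf.ibm.com/v1alpha1"}}}' \
  "${PROJECT_CPD_INST_OPERATORS}")
echo "${patch}"

# Sanity-check that the payload is valid JSON before using it with 'oc patch'.
echo "${patch}" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'
```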
- Verify that policy assignments are associated with the correct IBM Software Hub backup and restore recipes by running the command again:

  ```shell
  oc get policyassignments.data-protection.isf.ibm.com -n ${PROJECT_FUSION}
  ```

  Check that the command returns output such as in the following example:

  ```
  NAME                               PROVIDER             APPLICATION     BACKUPPOLICY    RECIPE             RECIPENAMESPACE   PHASE      LASTBACKUPTIMESTAMP   CAPACITY
  cpd-operator-cpd-operator-apps     isf-backup-restore   cpd-operator    cpd-operator    ibmcpd-tenant      cpd-operator      Assigned   20h                   59460607
  cpd-scheduler-cpd-scheduler-apps   isf-backup-restore   cpd-scheduler   cpd-scheduler   ibmcpd-scheduler   cpd-scheduler     Assigned   20h                   88206
  ```
Check that the command returns output such as in the following example:oc get policyassignments.data-protection.isf.ibm.com -n ${PROJECT_FUSION}NAME PROVIDER APPLICATION BACKUPPOLICY RECIPE RECIPENAMESPACE PHASE LASTBACKUPTIMESTAMP CAPACITY cpd-operator-cpd-operator-apps isf-backup-restore cpd-operator cpd-operator ibmcpd-tenant cpd-operator Assigned 20h 59460607 cpd-scheduler-cpd-scheduler-apps isf-backup-restore cpd-scheduler cpd-scheduler ibmcpd-scheduler cpd-scheduler Assigned 20h 88206 - If the recipes are not associated to the correct policy assignments, do the following substeps.
- Unassign the policies from the applications.
- Verify that the cpdbr service is installed in the IBM Software Hub operator project.
- Verify that the recipe recipes.spp-data-protection.isf.ibm.com was installed in the IBM Software Hub operator project and, if the scheduling service is installed, in the scheduling service project.
- Reassign the policies to the respective applications.
- Repeat the step to patch policy assignments.
- Go to .
- Select the `${PROJECT_CPD_INST_OPERATORS}` application and click Back up now. When the `${PROJECT_CPD_INST_OPERATORS}` application is backed up, the `${PROJECT_CPD_INST_OPERANDS}` project is backed up with it.
- If installed, repeat the previous step to back up the `${PROJECT_SCHEDULING_SERVICE}` application.
${PROJECT_SCHEDULING_SERVICE}application. - To monitor backup jobs, go to and click the Backups tab.
- Select a job to view the inventory of what resources will be backed up and the progress of the backup flow.
- If the backup fails, return IBM Software Hub to a good state before you retry a backup.
  - Get the IBM Software Hub instance (tenant) pod:

    ```shell
    CPD_TENANT_POD=`oc get po -n ${PROJECT_CPD_INST_OPERATORS} -l component=cpdbr-tenant,icpdsupport/addOnId=cpdbr,icpdsupport/app=br-service | grep cpdbr-tenant-service | awk '{print $1}'`
    echo "cpd tenant pod: $CPD_TENANT_POD"
    ```
  - Run the backup post-hooks:

    ```shell
    oc exec -it -n ${PROJECT_CPD_INST_OPERATORS} $CPD_TENANT_POD -- /cpdbr-scripts/cpdbr/checkpoint_backup_posthooks.sh --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS}
    ```
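The pod lookup in the failure-recovery step selects by label and then filters by name. The following is a self-contained illustration of the grep/awk stage against sample `oc get po` output; the pod names are hypothetical.

```shell
# Sample 'oc get po' output; a live cluster query would replace this file.
cat <<'EOF' > /tmp/pods.txt
NAME                                   READY   STATUS    RESTARTS   AGE
cpdbr-tenant-service-6d9f7c9b8-x2kqp   1/1     Running   0          3d
cpdbr-vendor-agent-7c44d5f6b-9ll2m     1/1     Running   0          3d
EOF

# Same filter as the CPD_TENANT_POD lookup, minus the oc call.
CPD_TENANT_POD=$(grep cpdbr-tenant-service /tmp/pods.txt | awk '{print $1}')
echo "cpd tenant pod: ${CPD_TENANT_POD}"
```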
3. Preparing to restore IBM Software Hub to a different cluster
Complete the following prerequisite tasks before you restore an online backup. Some tasks are service-specific, and need to be done only when those services are installed.
3.1 Preparing the target cluster
Prepare the target cluster that you want to use to restore IBM Software Hub.
- Make sure that the target cluster meets the following requirements:
- The target cluster has the same storage classes as the source cluster.
- For environments that use a private container registry, such as air-gapped environments, the target cluster has the same image content source policy as the source cluster. For details on configuring the image content source policy, see Configuring an image content source policy for IBM Software Hub images.
- The target cluster must be able to pull software images. For details, see Updating the global image pull secret for IBM Software Hub.
- The deployment
environment of the target cluster is the same as the source cluster.
- The target cluster uses the same hardware architecture as the source cluster. For example, x86-64.
- The target cluster is on the same OpenShift version as the source cluster.
- The target cluster allows for the same node configuration as the source cluster. For example, if the source cluster uses a custom KubeletConfig, the target cluster must allow the same custom KubeletConfig.
- Moving between IBM Cloud and non-IBM Cloud deployment environments is not supported.
- If you are using node labels as the method for identifying nodes in the cluster, re-create the labels on the target cluster. Best practice: Use node labels instead of node lists when you are restoring an IBM Software Hub deployment to a different cluster, especially if you plan to enforce node pinning. Node labels enable node pinning with minimal disruption. To learn more, see Passing node information to IBM Software Hub.
- Install one of the following IBM Fusion versions:
- Version 2.10.x with the latest hotfix
- In IBM Fusion on the hub (source) cluster, go to .
- Under 1. Connect your clusters (optional), click Use Fusion
UI and then click Copy snippet.
The hub connection snippet is copied.
- In IBM Fusion on the spoke (target) cluster, click Services, and in the Services page, click Backup & Restore Agent.
- In the Install Service page, paste the hub connection snippet that you previously copied from the hub cluster.
- Select the storage class (RWO) that you want to use and click
Install.
The ibm-backup-restore project (namespace) is created on the cluster, and the service is installed in that project.
- Install or upgrade cpdbr service role-based access controls (RBACs). Notes:
  - Ensure that the same version of the cpdbr service is installed on the source and target clusters.
  - It is recommended that you install the latest version of the cpdbr service. If you previously installed the service, upgrade it by doing the upgrade steps.
  - When the cpdbr service is installed on a target cluster, only the required permissions and cluster role bindings are created, because the IBM Software Hub projects (namespaces) aren't yet restored.
- Install the cpdbr service RBACs on the target cluster
  - Log in to Red Hat OpenShift Container Platform as a cluster administrator.

    ```shell
    ${OC_LOGIN}
    ```

    Remember: `OC_LOGIN` is an alias for the `oc login` command.
  - Install the cpdbr service. Note: Run the cpdbr installation command in the IBM Software Hub operators project even though the project does not yet exist in the target cluster. Do not manually create the project on the target cluster. The project is created during the IBM Software Hub restore process.

    Environments with the scheduling service:

    ```shell
    cpd-cli oadp install \
    --component=cpdbr-tenant \
    --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
    --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
    --namespace=${OADP_PROJECT} \
    --log-level=debug \
    --rbac-only=true \
    --verbose
    ```

    Environments without the scheduling service:

    ```shell
    cpd-cli oadp install \
    --component=cpdbr-tenant \
    --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
    --namespace=${OADP_PROJECT} \
    --log-level=debug \
    --rbac-only=true \
    --verbose
    ```
  - Verify that the ClusterRole and ClusterRoleBinding were created:

    ```shell
    oc get clusterrole cpdbr-tenant-service-clusterrole
    oc get clusterrolebinding cpdbr-tenant-service-crb
    ```

    If the cluster role bindings were created successfully, these commands return output like the following examples:

    ```
    NAME                               CREATED AT
    cpdbr-tenant-service-clusterrole   <timestamp>
    ```

    ```
    NAME                       ROLE                                           AGE
    cpdbr-tenant-service-crb   ClusterRole/cpdbr-tenant-service-clusterrole   45h
    ```
- Upgrade the cpdbr service on the target cluster
  - Log in to Red Hat OpenShift Container Platform as a cluster administrator.

    ```shell
    ${OC_LOGIN}
    ```

    Remember: `OC_LOGIN` is an alias for the `oc login` command.
  - Upgrade the cpdbr service.

    Environments with the scheduling service:

    ```shell
    cpd-cli oadp install \
    --upgrade=true \
    --component=cpdbr-tenant \
    --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
    --cpd-scheduler-namespace=${PROJECT_SCHEDULING_SERVICE} \
    --namespace=${OADP_PROJECT} \
    --rbac-only=true \
    --log-level=debug \
    --verbose
    ```

    Environments without the scheduling service:

    ```shell
    cpd-cli oadp install \
    --upgrade=true \
    --component=cpdbr-tenant \
    --tenant-operator-namespace=${PROJECT_CPD_INST_OPERATORS} \
    --namespace=${OADP_PROJECT} \
    --rbac-only=true \
    --log-level=debug \
    --verbose
    ```
- Verify the hub and spoke topology.
- In IBM Fusion on the hub cluster, go to .
- Verify that the hub and spoke are connected and in a healthy service state.
- Install Certificate manager and the IBM License Service. For details, see Installing shared cluster components for IBM Software Hub. Note: You must install the same version of Certificate manager and the IBM License Service that is installed on the source cluster.
- If your IBM Software Hub deployment includes the following services, install and set up prerequisite software.
Instructions for installing prerequisite software are located in Installing prerequisite software.
The following list maps each prerequisite software to the services that require it.

GPU operators. An asterisk (*) indicates that the service requires GPU in some situations. Required by:
- IBM Knowledge Catalog Premium *
- IBM Knowledge Catalog Standard *
- Watson Machine Learning *
- Watson Studio Runtimes *
- watsonx.ai™
- watsonx Assistant *
- Watsonx BI
- watsonx Code Assistant™
- watsonx Code Assistant for Red Hat Ansible® Lightspeed
- watsonx Code Assistant for Z
- watsonx Code Assistant for Z Agentic 5.2.1 and later
- watsonx Code Assistant for Z Code Explanation
- watsonx Code Assistant for Z Code Generation 5.2.1 and later
- watsonx.data™ *
- watsonx.data Premium
- watsonx.data intelligence
- watsonx™ Orchestrate *
Red Hat OpenShift AI. An asterisk (*) indicates that the service requires Red Hat OpenShift AI in some situations. Required by:
- IBM Knowledge Catalog Premium *
- IBM Knowledge Catalog Standard *
- watsonx.ai
- watsonx Assistant *
- Watsonx BI
- watsonx Code Assistant
- watsonx Code Assistant for Red Hat Ansible Lightspeed
- watsonx Code Assistant for Z
- watsonx Code Assistant for Z Agentic 5.2.1 and later
- watsonx Code Assistant for Z Code Explanation
- watsonx Code Assistant for Z Code Generation 5.2.1 and later
- watsonx.data Premium
- watsonx.data intelligence
- watsonx Orchestrate *
Multicloud Object Gateway. Required by:
- Watson Discovery
- Watson Speech services
- watsonx Assistant
- watsonx Orchestrate
Red Hat OpenShift Serverless Knative Eventing. Required by:
- watsonx Assistant
- watsonx Orchestrate
Warning: Do not create the secrets that the service needs to communicate with Multicloud Object Gateway. Instead, the secrets must be created as a post-restore task.
3.2 Cleaning up the target cluster after a previous restore
If you previously restored an IBM Software Hub backup or a previous restore attempt was unsuccessful, delete the IBM Software Hub instance projects (namespaces) in the target cluster before you try another restore.
Resources in the IBM Software Hub instance are watched and managed by operators and controllers that run in other projects. To prevent corrupted or out-of-sync operators and resources when you delete an IBM Software Hub instance, you must locate Kubernetes resources that have finalizers specified in their metadata and delete those finalizers before you can delete the IBM Software Hub instance.
- Log in to Red Hat OpenShift Container Platform as an instance administrator.

  ```shell
  ${OC_LOGIN}
  ```

  Remember: `OC_LOGIN` is an alias for the `oc login` command.
- Download the cpd-pre-restore-cleanup.sh script from https://github.com/IBM/cpd-cli/tree/master/cpdops/5.2.0.
- If the tenant operator project exists and has the common-service `NamespaceScope` custom resource that identifies all the tenant projects, run the following command:

  ```shell
  ./cpd-pre-restore-cleanup.sh --tenant-operator-namespace="${PROJECT_CPD_INST_OPERATORS}"
  ```
- If the tenant operator project does not exist or specific IBM Software Hub projects need to be deleted, run the following command. If the common-service `NamespaceScope` custom resource is not available and additional projects, such as tethered projects, need to be deleted, modify the list of comma-separated projects in the `--additional-namespaces` option as necessary.

  ```shell
  ./cpd-pre-restore-cleanup.sh --additional-namespaces="${PROJECT_CPD_INST_OPERATORS},${PROJECT_CPD_INST_OPERANDS}"
  ```
- If the IBM Software Hub scheduling service was installed, uninstall it. For details, see Uninstalling the scheduling service.
3.3 Validating cpdbr service permissions
After the cpdbr service is installed on the target cluster, check that the permissions that the service needs exist.
- Check the roles and role bindings that are needed for the IBM Software Hub restore:

  ```shell
  oc get clusterrole cpdbr-tenant-service-clusterrole
  oc get clusterrolebinding cpdbr-tenant-service-crb
  ```
- Check that the IBM Software Hub operators service account is associated with the `cpdbr-tenant-service-role` role in the kube-public project (namespace):

  ```shell
  oc describe -n kube-public rolebinding cpdbr-tenant-service-rb | grep ${PROJECT_CPD_INST_OPERATORS}
  ```

  Example output:

  ```
  Name:         cpdbr-tenant-service-rb
  Labels:       component=cpdbr-tenant
                icpdsupport/addOnId=cpdbr
                icpdsupport/app=br-service
                icpdsupport/cpdbr=true
  Annotations:  <none>
  Role:
    Kind:  Role
    Name:  cpdbr-tenant-service-role
  Subjects:
    Kind            Name                     Namespace
    ----            ----                     ---------
    ServiceAccount  cpdbr-tenant-service-sa  cpd-op
    ServiceAccount  cpdbr-tenant-service-sa  group1-op
    ServiceAccount  cpdbr-tenant-service-sa  cpd-op-grp2
  ```
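The describe output can also be checked mechanically for the operators service account. The following sketch runs against a sample Subjects listing (namespaces taken from the example output; substitute your own `${PROJECT_CPD_INST_OPERATORS}` value on a live cluster):

```shell
# Sample Subjects rows from 'oc describe rolebinding cpdbr-tenant-service-rb'.
cat <<'EOF' > /tmp/subjects.txt
ServiceAccount  cpdbr-tenant-service-sa  cpd-op
ServiceAccount  cpdbr-tenant-service-sa  group1-op
EOF

PROJECT_CPD_INST_OPERATORS=cpd-op   # hypothetical value

# Succeeds only if the service account is bound in the operators project.
if awk -v ns="${PROJECT_CPD_INST_OPERATORS}" \
     '$1 == "ServiceAccount" && $3 == ns { found = 1 } END { exit !found }' /tmp/subjects.txt; then
  echo "cpdbr-tenant-service-sa is bound in ${PROJECT_CPD_INST_OPERATORS}"
else
  echo "missing binding; reinstall the cpdbr service RBACs"
fi
```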
4. Restoring IBM Software Hub to a different cluster
Restore an online backup to a different cluster by doing the following steps.
-
You cannot restore a backup to a different project of the IBM Software Hub instance.
- During the restore process, IBM Software Hub does not support any software or configurations that may cause pods to be rescheduled or moved to different nodes. To ensure a successful restore, maintain pod placement stability throughout the process.
-
If service-related custom resources are manually placed into maintenance mode prior to creating an online backup, those custom resources will remain in the same state if the backup is restored. Taking these services out of maintenance mode must be done manually after the restore.
Check the Known issues and limitations for IBM Software Hub page for any workarounds that you might need to do before you restore a backup.
- In IBM Fusion on the hub cluster, expand Backup & restore and click Backed up applications.
- If installed, restore the `${PROJECT_SCHEDULING_SERVICE}` application by selecting it and then clicking Restore. Otherwise, go to step 8.
- In the Restore page, select Choose a different cluster to restore the application in., select the target cluster, and click Next.
- In the next restore page, select a backup to restore and click Next.
- In the next restore page, do not change any default options and click Restore. Important: Make sure that the Include missing etcd resources checkbox is selected.
- In the Confirm restore dialog, click Restore.
- Wait for the application restore to complete and its resources to be fully running before starting to restore the next application.
- Repeat steps 2-7 to restore the IBM Software Hub instance (tenant) application (`${PROJECT_CPD_INST_OPERATORS}`). When the `${PROJECT_CPD_INST_OPERATORS}` application is restored, the IBM Software Hub instance project (`${PROJECT_CPD_INST_OPERANDS}`) is restored at the same time. Note: If IBM Software Hub operator install plans in the source cluster are set to the manual approval strategy (installPlanApproval: Manual), you must approve the install plans as the operators are restored. Otherwise, the restore process will not progress. For more information about operator install plans, see Install plan.
- To monitor restore jobs, go to and click the Restores tab.
- Select a job to view the inventory of what resources will be restored and the progress of the restore flow.
- Check that all service instances are restored and in good status.
  When the restore jobs are completed, all resources are restored. However, the platform and services might need more time to reconcile and start up before you can use them.
  - Log the `cpd-cli` in to the Red Hat OpenShift Container Platform cluster:

    ```shell
    ${CPDM_OC_LOGIN}
    ```

    Remember: `CPDM_OC_LOGIN` is an alias for the `cpd-cli manage login-to-ocp` command.
  - Get the status of all services:

    ```shell
    cpd-cli manage get-cr-status \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
    ```
  - Verify that all services show Completed.
5. Completing post-restore tasks
Complete additional tasks for the control plane and for some services after you restore an IBM Software Hub deployment.
5.1 Passing node information and applying cluster HTTP proxy settings or other RSI patches to the control plane
If you use node lists to pin pods to nodes, you must re-run the cpd-cli manage apply-entitlement command after you restore IBM Software Hub on the target cluster. Any pods that need to be rescheduled will be unavailable while they are moved to different nodes. For more information, see Passing node information to IBM Software Hub.

To apply cluster HTTP proxy settings or other RSI patches, run the following command:

```shell
cpd-cli manage apply-rsi-patches --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} -vvv
```

5.2 Patching Cognos Analytics instances
- Patch the content store and audit database ports in the Cognos Analytics service instance by running the following script:

  ```shell
  #!/usr/bin/env bash
  #-----------------------------------------------------------------------------
  # Licensed Materials - Property of IBM
  # IBM Cognos Products: ca
  # (C) Copyright IBM Corp. 2024
  # US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule
  #-----------------------------------------------------------------------------
  set -e
  #set -x

  function usage {
    echo $0: usage: $0 [-h] -t tethered_namespace -a audit_db_port_number -c cs_db_port_number [-v]
  }

  function help {
    usage
    echo "-h prints help to the console"
    echo "-t tethered namespace (required)"
    echo "-a Audit DB port number"
    echo "-c CS DB port number"
    echo "-v turn on verbose mode"
    echo ""
    exit 0
  }

  while getopts ":ht:a:c:v" opt; do
    case ${opt} in
      h) help ;;
      t) tethered_namespace=$OPTARG ;;
      a) audit_db_port_number=$OPTARG ;;
      c) cs_db_port_number=$OPTARG ;;
      v) verbose_flag="true" ;;
      ?) usage
         exit 0 ;;
    esac
  done

  if [[ -z ${tethered_namespace} ]]; then
    echo "A tethered namespace must be provided"
    help
  fi

  echo "Get CAServiceInstance Name"
  cr_name=$(oc -n ${tethered_namespace} get caserviceinstance --no-headers -o custom-columns=NAME:.metadata.name)
  if [[ -z ${cr_name} ]]; then
    echo "Unable to find CAServiceInstance CR for namespace: ${tethered_namespace}"
    help
  fi

  if [[ ! -z ${cs_db_port_number} ]]; then
    echo "Updating CS Database Port Number in the Custom Resource ${cr_name}..."
    oc patch caserviceinstance ${cr_name} --type merge -p "{\"spec\":{\"cs\":{\"database_port\":\"${cs_db_port_number}\"}}}" -n ${tethered_namespace}
  fi

  if [[ ! -z ${audit_db_port_number} ]]; then
    echo "Updating Audit Database Port Number in the Custom Resource ${cr_name}..."
    oc patch caserviceinstance ${cr_name} --type merge -p "{\"spec\":{\"audit\":{\"database_port\":\"${audit_db_port_number}\"}}}" -n ${tethered_namespace}
  fi

  sleep 20
  check_status="Completed"
  ```
- Check the status of the Cognos Analytics reconcile action:

  ```shell
  for i in {1..240}; do
    caStatus=$(oc get caserviceinstance ${cr_name} -o jsonpath="{.status.caStatus}" -n ${tethered_namespace})
    if [[ ${caStatus} == ${check_status} ]]; then
      echo "ca ${check_status} Successfully"
      break
    elif [[ ${caStatus} == "Failed" ]]; then
      echo "ca ${caStatus}!"
      exit 1
    fi
    echo "ca Status: ${caStatus}"
    sleep 30
  done
  ```
5.3 Patching Watson OpenScale database
If a Db2 or Db2 Warehouse database in the cluster is used as the data mart database in Watson OpenScale, the Db2 database port might be different after the restore. Update the port in the Watson OpenScale instance database settings to the correct value. You can update the port through the Watson OpenScale user interface or API.
5.4 Resetting status of ongoing Watson OpenScale model evaluations
When a Watson OpenScale instance is restored, some of its features, such as scheduled or on-demand model evaluations, might not function properly. You must reset the status of ongoing model evaluations. For details, see Resetting status of ongoing Watson OpenScale model evaluations.
5.5 Restarting IBM Knowledge Catalog metadata import jobs
After IBM Software Hub is restored, a metadata import job might show a status of Running, even though the actual import job isn't running. The job must be canceled and manually restarted. You can cancel and restart a job in IBM Knowledge Catalog or by using an API call.
- Cancel and restart a job in IBM Knowledge Catalog
-
- Go to a Jobs page, either the general one or the one for the project that contains the metadata import asset.
- Look for the job and cancel it.
- Restart the job.
- Cancel and restart a job by using an API call
-
  Note: You must have the Admin role to use this API call.

  ```
  POST /v2/metadata_imports/recover_task
  ```

  The request payload must look like the following example. For recovery_date, specify the date when IBM Knowledge Catalog was restored from the backup image. Any jobs that were started before the specified date are restarted automatically.

  ```
  {
    "recovery_date": "2022-05-05T01:00:00Z",
    "pending_type": "running"
  }
  ```
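The recover_task request can be scripted. The following sketch builds the payload with the current date and validates its shape; the route host and token in the comment are placeholders, not real values.

```shell
# Build the recover_task payload; recovery_date is the date of the restore.
recovery_date=$(date -u +%Y-%m-%dT%H:%M:%SZ)
payload=$(printf '{"recovery_date": "%s", "pending_type": "running"}' "${recovery_date}")
echo "${payload}"

# With real credentials, the call would look like this (placeholders only):
#   curl -X POST "https://<cpd-route>/v2/metadata_imports/recover_task" \
#     -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
#     -d "${payload}"
```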
5.6 Restarting IBM Knowledge Catalog metadata enrichment jobs
After IBM Software Hub is restored, running metadata enrichment jobs might not complete successfully. Such jobs must be manually restarted.
- In IBM Knowledge Catalog, open the project that contains the metadata enrichment asset.
- Select the asset.
- Click the menu button of the asset, and then click Enrich to start a new enrichment job.
5.7 Rerunning IBM Knowledge Catalog lineage data import jobs
If a lineage data import job is running at the same time that an online
backup is taken, the job is in a Complete state when the backup is restored.
However, users cannot see lineage data in the catalog. Rerun the lineage import job.
5.8 Restarting IBM Knowledge Catalog lineage pods
After IBM Software Hub is restored, restart the following lineage pods:
- wkc-data-lineage-service-xxx
- wdp-kg-ingestion-service-xxx
- Log in to Red Hat OpenShift Container Platform as a cluster administrator:
  ${OC_LOGIN}
  Remember: OC_LOGIN is an alias for the oc login command.
- Restart the wkc-data-lineage-service-xxx pod:
  oc delete -n ${PROJECT_CPD_INST_OPERANDS} "$(oc get pods -o name -n ${PROJECT_CPD_INST_OPERANDS} | grep wkc-data-lineage-service)"
- Restart the wdp-kg-ingestion-service-xxx pod:
  oc delete -n ${PROJECT_CPD_INST_OPERANDS} "$(oc get pods -o name -n ${PROJECT_CPD_INST_OPERANDS} | grep wdp-kg-ingestion-service)"
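The grep-based delete passes an empty name to `oc delete` when no pod matches, which produces a confusing error. A guard like the following can help; the helper name is hypothetical, and the `oc` invocations are the same ones used in the steps above.

```shell
# Hypothetical helper: delete (restart) the pod whose name contains a pattern,
# but skip the delete when no pod matches instead of passing oc an empty name.
restart_matching_pod() {
  local ns=$1 pattern=$2 pod
  pod=$(oc get pods -o name -n "${ns}" | grep "${pattern}" || true)
  if [[ -n ${pod} ]]; then
    oc delete -n "${ns}" ${pod}
  else
    echo "no pod matching '${pattern}' found in ${ns}"
    return 1
  fi
}

# Example usage:
# restart_matching_pod ${PROJECT_CPD_INST_OPERANDS} wkc-data-lineage-service
```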
5.9 Retraining existing watsonx Assistant skills and creating secrets to connect to Multicloud Object Gateway
After you restore the watsonx Assistant backup, you must retrain the existing skills. Retraining involves modifying a skill to trigger training. Training a skill typically takes less than 10 minutes. For more information, see the Retraining your backend model section in the IBM Cloud documentation.
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:
  ${CPDM_OC_LOGIN}
  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Get the names of the secrets that contain the NooBaa account credentials and certificate:
  oc get secrets --namespace=openshift-storage
- Set the following environment variables based on the names of the secrets on your cluster.
  - Set NOOBAA_ACCOUNT_CREDENTIALS_SECRET to the name of the secret that contains the NooBaa account credentials. The default name is noobaa-admin. If you created multiple backing stores, ensure that you specify the credentials for the appropriate backing store.
    export NOOBAA_ACCOUNT_CREDENTIALS_SECRET=<secret-name>
  - Set NOOBAA_ACCOUNT_CERTIFICATE_SECRET to the name of the secret that contains the NooBaa account certificate. The default name is noobaa-s3-serving-cert.
    export NOOBAA_ACCOUNT_CERTIFICATE_SECRET=<secret-name>
- Create the secrets that watsonx Assistant uses to connect to Multicloud Object Gateway.
  - Run the setup-mcg command to create the secrets:
    cpd-cli manage setup-mcg \
    --components=watson_assistant \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --noobaa_account_secret=${NOOBAA_ACCOUNT_CREDENTIALS_SECRET} \
    --noobaa_cert_secret=${NOOBAA_ACCOUNT_CERTIFICATE_SECRET}
  - Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS] ... setup-mcg completed successfully.
  - Confirm that the secrets were created in the operands project for the instance:
    oc get secrets --namespace=${PROJECT_CPD_INST_OPERANDS} \
    noobaa-account-watson-assistant \
    noobaa-cert-watson-assistant \
    noobaa-uri-watson-assistant
  - If the command returns Error from server (NotFound), rerun the setup-mcg command.
- If present, delete the following resources that connect to Multicloud Object Gateway.
  Tip: After these resources are deleted, they are recreated with the updated object store secrets.
  - Set the instance name environment variable to the name that you want to use for the service instance.
    export INSTANCE=<Watson_Assistant_Instance_Name>
  - If they are present, delete the following resources:
    oc delete job $INSTANCE-create-bucket-store-cos-job
    oc delete secret registry-$INSTANCE-clu-training-$INSTANCE-dwf-training
    oc delete job $INSTANCE-clu-training-update
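The confirmation check in the steps above can be scripted so that a missing secret is caught before you continue. This is a sketch: the helper name is hypothetical, and the secret names are the ones listed in the confirmation step. The same pattern applies to the Watson Discovery, watsonx Orchestrate, and Watson Speech services sections that follow, with each service's own secret names.

```shell
# Hypothetical check: confirm that each expected Multicloud Object Gateway
# secret exists in the operands project; report any that are missing.
verify_mcg_secrets() {
  local ns=$1; shift
  local missing=0 s
  for s in "$@"; do
    if ! oc get secret "${s}" --namespace="${ns}" >/dev/null 2>&1; then
      echo "missing secret: ${s}"
      missing=1
    fi
  done
  return ${missing}
}

# Example usage:
# verify_mcg_secrets ${PROJECT_CPD_INST_OPERANDS} \
#   noobaa-account-watson-assistant noobaa-cert-watson-assistant noobaa-uri-watson-assistant
```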
5.10 Creating secrets to connect Watson Discovery to Multicloud Object Gateway
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:
  ${CPDM_OC_LOGIN}
  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Get the names of the secrets that contain the NooBaa account credentials and certificate:
  oc get secrets --namespace=openshift-storage
- Set the following environment variables based on the names of the secrets on your cluster.
  - Set NOOBAA_ACCOUNT_CREDENTIALS_SECRET to the name of the secret that contains the NooBaa account credentials. The default name is noobaa-admin. If you created multiple backing stores, ensure that you specify the credentials for the appropriate backing store.
    export NOOBAA_ACCOUNT_CREDENTIALS_SECRET=<secret-name>
  - Set NOOBAA_ACCOUNT_CERTIFICATE_SECRET to the name of the secret that contains the NooBaa account certificate. The default name is noobaa-s3-serving-cert.
    export NOOBAA_ACCOUNT_CERTIFICATE_SECRET=<secret-name>
- Create the secrets that Watson Discovery uses to connect to Multicloud Object Gateway.
  - Run the setup-mcg command to create the secrets:
    cpd-cli manage setup-mcg \
    --components=watson_discovery \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --noobaa_account_secret=${NOOBAA_ACCOUNT_CREDENTIALS_SECRET} \
    --noobaa_cert_secret=${NOOBAA_ACCOUNT_CERTIFICATE_SECRET}
  - Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS] ... setup-mcg completed successfully.
  - Confirm that the secrets were created in the operands project for the instance:
    oc get secrets --namespace=${PROJECT_CPD_INST_OPERANDS} \
    noobaa-account-watson-discovery
  - If the command returns Error from server (NotFound), rerun the setup-mcg command.
5.11 Creating secrets to connect watsonx Orchestrate to Multicloud Object Gateway
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:
  ${CPDM_OC_LOGIN}
  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Get the names of the secrets that contain the NooBaa account credentials and certificate:
  oc get secrets --namespace=openshift-storage
- Set the following environment variables based on the names of the secrets on your cluster.
  - Set NOOBAA_ACCOUNT_CREDENTIALS_SECRET to the name of the secret that contains the NooBaa account credentials. The default name is noobaa-admin. If you created multiple backing stores, ensure that you specify the credentials for the appropriate backing store.
    export NOOBAA_ACCOUNT_CREDENTIALS_SECRET=<secret-name>
  - Set NOOBAA_ACCOUNT_CERTIFICATE_SECRET to the name of the secret that contains the NooBaa account certificate. The default name is noobaa-s3-serving-cert.
    export NOOBAA_ACCOUNT_CERTIFICATE_SECRET=<secret-name>
- Create the secrets that watsonx Orchestrate uses to connect to Multicloud Object Gateway.
  - Run the setup-mcg command to create the secrets:
    cpd-cli manage setup-mcg \
    --components=watsonx_orchestrate \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --noobaa_account_secret=${NOOBAA_ACCOUNT_CREDENTIALS_SECRET} \
    --noobaa_cert_secret=${NOOBAA_ACCOUNT_CERTIFICATE_SECRET}
  - Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS] ... setup-mcg completed successfully.
  - Confirm that the secrets were created in the operands project for the instance:
    oc get secrets --namespace=${PROJECT_CPD_INST_OPERANDS} \
    noobaa-account-watson-orchestrate
  - If the command returns Error from server (NotFound), rerun the setup-mcg command.
- If you haven't already done so, create the secrets to connect watsonx Assistant to Multicloud Object Gateway.
5.12 Creating secrets to connect Watson Speech services to Multicloud Object Gateway
Some Watson Speech services pods might be in an Error state because they cannot connect to Multicloud Object Gateway.
- Log the cpd-cli in to the Red Hat OpenShift Container Platform cluster:
  ${CPDM_OC_LOGIN}
  Remember: CPDM_OC_LOGIN is an alias for the cpd-cli manage login-to-ocp command.
- Get the names of the secrets that contain the NooBaa account credentials and certificate:
  oc get secrets --namespace=openshift-storage
- Set the following environment variables based on the names of the secrets on your cluster.
  - Set NOOBAA_ACCOUNT_CREDENTIALS_SECRET to the name of the secret that contains the NooBaa account credentials. The default name is noobaa-admin. If you created multiple backing stores, ensure that you specify the credentials for the appropriate backing store.
    export NOOBAA_ACCOUNT_CREDENTIALS_SECRET=<secret-name>
  - Set NOOBAA_ACCOUNT_CERTIFICATE_SECRET to the name of the secret that contains the NooBaa account certificate. The default name is noobaa-s3-serving-cert.
    export NOOBAA_ACCOUNT_CERTIFICATE_SECRET=<secret-name>
- Create the secrets that the Watson Speech services use to connect to Multicloud Object Gateway.
  - Run the setup-mcg command to create the secrets:
    cpd-cli manage setup-mcg \
    --components=watson_speech \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS} \
    --noobaa_account_secret=${NOOBAA_ACCOUNT_CREDENTIALS_SECRET} \
    --noobaa_cert_secret=${NOOBAA_ACCOUNT_CERTIFICATE_SECRET}
  - Wait for the cpd-cli to return the following message before proceeding to the next step:
    [SUCCESS] ... setup-mcg completed successfully.
  - Confirm that the secrets were created in the operands project for the instance:
    oc get secrets --namespace=${PROJECT_CPD_INST_OPERANDS} \
    noobaa-account-watson-speech
  - If the command returns Error from server (NotFound), rerun the setup-mcg command.
- Check the status of the Watson Speech services pods:
  oc get po -l 'app.kubernetes.io/component in (stt-models, tts-voices)' -n ${PROJECT_CPD_INST_OPERANDS} | grep ${CUSTOM_RESOURCE_SPEECH}

5.13 Installing the privileged monitoring service
If the privileged monitoring service was installed in the source cluster, install the service in the target cluster. For details, see Installing privileged monitors.
5.14 Restoring services that do not support online backup and restore
- Data Gate
- Data Gate synchronizes Db2 for z/OS data in real time. After IBM Software Hub is restored, data might be out of sync with Db2 for z/OS. It is recommended that you re-add tables after IBM Software Hub foundational services are restored.
- MANTA Automated Data Lineage
- The service is functional and data can be re-imported. For information about importing data, see Managing existing metadata imports (IBM Knowledge Catalog).
- MongoDB
- The service must be deleted and reinstalled. Recreate the instance as a new instance, and then restore the data with MongoDB tools. For more information, see Installing the MongoDB service and Back Up and Restore with MongoDB Tools.