Backing up and restoring IBM Cloud Pak for Integration

You can back up and restore some parts of your Cloud Pak for Integration installation by using Red Hat OpenShift API for Data Protection (OADP). OADP is a tool, based on the Velero project, for backing up and restoring Kubernetes cluster resources and persistent volumes, which you might want to do as part of disaster recovery preparation. For more information about OADP, see Introduction to OpenShift API for Data Protection in the Red Hat OpenShift documentation.

The backup process saves a copy of the configuration (and some of the data) for some of the instances in Cloud Pak for Integration. Currently, the instance types that you can back up by using OADP are as follows:

Operator Instance type
IBM Cloud Pak® for Integration
  • Platform UI
  • Declarative API
  • Declarative API Product
  • Integration assembly
  • Messaging server
  • Messaging queue
  • Messaging channel
  • Messaging user
IBM Automation Foundation assets
  • Automation assets
IBM API Connect
  • API management
  • API Manager
  • API Analytics
  • API Gateway
  • API Portal
IBM App Connect
  • Configuration
  • Integration dashboard
  • Integration design
  • Integration runtime
  • Integration server
  • Switch server
IBM MQ
  • Queue manager
IBM DataPower Gateway
  • Enterprise gateways
IBM Event Streams
  • Kafka cluster
  • Kafka connect
  • Kafka topic
  • Kafka user
  • Kafka bridge
  • Kafka connector
  • Kafka rebalance
IBM Cloud Pak foundational services
  • Keycloak

For instance types that are not yet supported by OADP, you can recover data by using automation techniques. For more information about automation techniques and disaster recovery strategies, see Disaster recovery.

When you use OADP, you can specify which namespaces are backed up, and back up multiple namespaces at the same time. You can also specify which resources within a namespace are backed up, by using labels. You can restore a backup into the same cluster (in-place recovery) or into a new cluster.

Important: There are some limitations on what OADP can achieve:
  • Transient data (such as in-flight events and messages, and MQ configuration that is not specified declaratively) is not backed up.
  • OADP backup of API Connect two data center disaster recovery deployments is not supported.

Before you begin

  • You must be a cluster administrator to create or restore a backup. For more information, see OpenShift Roles and permissions.

  • Set up secure storage to contain the backups. For a list of storage types that OADP supports, see About installing OADP in the OpenShift Container Platform documentation.

  • OADP is not installed by default in an OpenShift Container Platform installation. If your cluster does not already contain the OADP operator, install it by following the instructions in Installing the OADP Operator in the OpenShift Container Platform documentation (select the latest stable-1.x update channel and accept the default namespace, which is openshift-adp).

  • If you intend to restore the files that are backed up in one cluster (the backup cluster) into another cluster (the restore cluster), do the following:

    • Ensure that the restore cluster has the same hostname, OADP configuration, and storage classes (with the same names that are used by the instances that you back up) as the backup cluster. Also ensure that the restore cluster has access to the storage location that contains the backups.

    • Install the OADP operator on the restore cluster and configure OADP in the same way that you did when you installed it on the backup cluster.

    • If you are using a certificate manager on the backup cluster, install a certificate manager on the restore cluster, as well.

  • Create a storage bucket and generate credentials in your object storage location. For example, on IBM Cloud you need to create a service ID with the Include HMAC Credential option selected; a command-line sketch follows.
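
    The following is a minimal sketch for IBM Cloud. It assumes that the IBM Cloud CLI with the cloud-object-storage plugin is installed and configured; the bucket, credential, and instance names are placeholders:

      # Create a bucket in an existing Cloud Object Storage instance
      ibmcloud cos bucket-create --bucket <bucket-name> --region <region>

      # Create a service credential that includes HMAC keys
      # (the access key ID and secret access key that OADP uses)
      ibmcloud resource service-key-create <credential-name> Writer --instance-name <cos-instance-name> --parameters '{"HMAC": true}'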

Configuring OADP

To configure OADP, you need to create a Secret and a DataProtectionApplication custom resource.

For more detailed information and examples, follow the instructions in the "Installing and configuring OADP" section of the OpenShift Container Platform documentation that are appropriate for your S3 storage type. For example, to configure OADP with AWS or IBM Cloud, follow the instructions in Configuring the OpenShift API for Data Protection with Amazon Web Services.

  1. Create a credentials-velero file. The following example is for AWS and IBM Cloud. Your file will differ if you are using a different S3 storage type:

    cat << EOF > ./credentials-velero
    [default]
    aws_access_key_id=<ACCESS_KEY_ID>
    aws_secret_access_key=<SECRET_ACCESS_KEY>
    EOF
  2. Use your credentials-velero file to create a Secret object with the default name of cloud-credentials:

    oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
  3. Copy the following YAML code to create a draft copy for modification:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: oadp-application
      namespace: openshift-adp
    spec:
      backupLocations:
        - velero:
            config:
              insecureSkipTLSVerify: 'true'
              region: <region>
              s3ForcePathStyle: 'true'
              s3Url: <s3Url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucketName>
              prefix: integration
            provider: aws
      configuration:
        restic:
          enable: true
        velero:
          customPlugins:
            - image: cp.icr.io/cp/appc/acecc-velero-plugin-prod:12.0.12-r1-20240604-122405@sha256:40394ae7f0c2a96cf53d24be535ce9e234459cb4b4e9f70609b64c9cfbd15357
              name: app-connect
            - image: cp.icr.io/cp/icp4i/ar-velero-plugin:1.7.1-2024-07-02-1421-0acb2152@sha256:c9298efc0646380aa1439d34abe4071de047e4e84adb6c89e18830c91fc79e06
              name: integration
            - image: cp.icr.io/cp/apic/ibm-apiconnect-apiconnect-velero-plugin@sha256:f6b53dc4e6d0559f3c053591e9a07b5aba16739dfc991a9936f3857802a0d115
              name: apiconnect 
          defaultPlugins:
            - openshift
            - aws
          logLevel: debug
  4. Replace the following values in the YAML code:

    • <s3Url>: the URL of your S3 storage. For example, https://s3.us-south.cloud-object-storage.appdomain.cloud

    • <bucketName>: the name of the bucket that you want to back up to in your S3 storage

    • <region>: the region where your S3 storage bucket is located. For example, us-south.

  5. Use your updated YAML code to create the DataProtectionApplication custom resource in your cluster. When the DataProtectionApplication custom resource is ready, the BackupStorageLocation custom resource, shown on the BackupStorageLocations tab of the OADP operator, also reports a ready status. You can also view the status by running the following command:

    oc get BackupStorageLocations -n openshift-adp
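
    For example, if you saved your modified YAML in a file named oadp-application.yaml (an illustrative file name), you can create the resource and, with a recent oc client that supports --for=jsonpath, wait for the backup storage location to become available:

    oc apply -f oadp-application.yaml
    oc wait backupstoragelocation --all -n openshift-adp --for=jsonpath='{.status.phase}'=Available --timeout=300s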

Creating a backup

Label the instances to back up

The backup process backs up any resource that has an appropriate backup label. You must therefore ensure that every resource that you want to back up has such a label.

The following steps show how to add labels to the instances in Cloud Pak for Integration by using the Red Hat OpenShift CLI. If you want to do more advanced actions such as adding your own labels or labeling only some instances, see the More information about labels section later in this topic.

To add labels by using commands, complete the following steps:

  1. Ensure that you have the Red Hat OpenShift command-line interface (CLI) installed, as described in Getting started with the OpenShift CLI.

  2. Log in to your OpenShift Container Platform system as a cluster administrator by running the oc login command.

  3. If you installed Cloud Pak for Integration into a single namespace, run the following command to make that namespace the default namespace. If you installed Cloud Pak for Integration into all namespaces, you can skip this step; later commands will run against all namespaces.

    oc project <cloud-pak-for-integration-namespace>

    For <cloud-pak-for-integration-namespace>, enter the namespace into which you installed the operators.

  4. Run the following oc label commands to add labels to the operators. For more information about the oc label command, see OpenShift CLI developer command reference in the Red Hat OpenShift documentation, or run the oc label --help command.

    • Add labels to the catalog sources, which make the operators available for you to install:

      oc label catalogsource ibm-integration-platform-navigator-catalog backup.integration.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource ibm-integration-asset-repository-catalog backup.integration.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource appconnect-operator-catalogsource backup.appconnect.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource ibm-datapower-operator-catalog backup.datapower.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource ibm-eventstreams backup.eventstreams.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource opencloud-operators backup.integration.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource ibm-apiconnect-catalog backup.apiconnect.ibm.com/component=catalogsource -n openshift-marketplace
      oc label catalogsource ibmmq-operator-catalogsource backup.mq.ibm.com/component=catalogsource -n openshift-marketplace

      The catalog source names in this example list are the default names when the catalog sources are created by using the oc ibm-pak tool. If you created your catalog sources by using a different method (such as installation by using the CLI or a CI/CD pipeline), you might need to enter different catalog source names.

    • Add labels to the operator subscriptions that are part of Cloud Pak for Integration, with the following exceptions:

      • Do not add labels for operators that were installed automatically, such as EDB. (You only need to add labels for operators that you installed.)

      • Do not add labels for the IBM DataPower Gateway operator if it was installed as a dependency of the IBM API Connect operator. If you do back up the subscription, the operator will not be installed when it is restored.

      oc label subscription ibm-integration-platform-navigator backup.integration.ibm.com/component=subscription
      oc label subscription ibm-integration-asset-repository backup.integration.ibm.com/component=subscription
      oc label subscription ibm-appconnect backup.appconnect.ibm.com/component=subscription
      oc label subscription datapower-operator backup.datapower.ibm.com/component=subscription
      oc label subscription ibm-eventstreams backup.eventstreams.ibm.com/component=subscription
      oc label subscription ibm-common-service-operator backup.integration.ibm.com/component=subscription
      oc label subscription ibm-apiconnect backup.apiconnect.ibm.com/component=subscription
      oc label subscription ibm-mq backup.mq.ibm.com/component=subscription

      The subscription names in this example list are the default names when the operator is installed by using the OpenShift web console. If you created your subscriptions by using a different method (such as installation by using the CLI or a CI/CD pipeline), you might need to enter different subscription names.

    • Add labels to the operator groups. This step is required only if you installed the operators in A specific namespace on the cluster mode. If you installed the operators in All namespaces on the cluster mode, you do not need to back up the operator groups.

      oc label operatorgroup --all backup.integration.ibm.com/component=operatorgroup
      oc label operatorgroup --all backup.appconnect.ibm.com/component=operatorgroup
      oc label operatorgroup --all backup.eventstreams.ibm.com/component=operatorgroup
      oc label operatorgroup --all backup.datapower.ibm.com/component=operatorgroup
      oc label operatorgroup --all backup.apiconnect.ibm.com/component=operatorgroup
      oc label operatorgroup --all backup.mq.ibm.com/component=operatorgroup
  5. Run the following commands to add labels to all the Cloud Pak for Integration instances on the cluster that support OADP backup and restore:

    oc label platformnavigator --all --all-namespaces backup.integration.ibm.com/component=platformnavigator  
    oc label integrationassembly --all --all-namespaces backup.integration.ibm.com/component=integrationassembly
    oc label messagingserver --all --all-namespaces backup.integration.ibm.com/component=messagingserver
    oc label messagingqueue --all --all-namespaces backup.integration.ibm.com/component=messagingqueue
    oc label messagingchannel --all --all-namespaces backup.integration.ibm.com/component=messagingchannel
    oc label messaginguser --all --all-namespaces backup.integration.ibm.com/component=messaginguser
    oc label assetrepository --all --all-namespaces backup.integration.ibm.com/component=assetrepository
    oc label configuration --all --all-namespaces backup.appconnect.ibm.com/component=configuration
    oc label api --all --all-namespaces backup.apiconnect.ibm.com/component=api
    oc label product --all --all-namespaces backup.apiconnect.ibm.com/component=product
    oc label dashboard --all --all-namespaces backup.appconnect.ibm.com/component=dashboard
    oc label designerauthoring --all --all-namespaces backup.appconnect.ibm.com/component=designerauthoring
    oc label integrationruntime --all --all-namespaces backup.appconnect.ibm.com/component=integrationruntime
    oc label integrationserver --all --all-namespaces backup.appconnect.ibm.com/component=integrationserver
    oc label switchserver --all --all-namespaces backup.appconnect.ibm.com/component=switchserver
    oc label eventstreams --all --all-namespaces backup.eventstreams.ibm.com/component=eventstreams
    oc label datapowerservice --all --all-namespaces backup.datapower.ibm.com/component=datapowerservice
    oc label kafkaconnect --all --all-namespaces backup.eventstreams.ibm.com/component=kafkaconnect
    oc label kafkatopic --all --all-namespaces backup.eventstreams.ibm.com/component=kafkatopic
    oc label kafkauser --all --all-namespaces backup.eventstreams.ibm.com/component=kafkauser
    oc label kafkabridge --all --all-namespaces backup.eventstreams.ibm.com/component=kafkabridge
    oc label kafkaconnector --all --all-namespaces backup.eventstreams.ibm.com/component=kafkaconnector
    oc label kafkarebalance --all --all-namespaces backup.eventstreams.ibm.com/component=kafkarebalance
    oc label apiconnectclusters --all --all-namespaces backup.apiconnect.ibm.com/component=apiconnectcluster
    oc label commonservice --all --all-namespaces backup.integration.ibm.com/component=commonservice

    If API Connect subsystems are deployed without the API Connect cluster CR, then label the API Connect subsystems and certificate issuers. Run these commands in each namespace where API Connect subsystems are deployed:

    oc label managementclusters --all backup.apiconnect.ibm.com/component=managementcluster
    oc label certificates.cert-manager.io ingress-ca backup.apiconnect.ibm.com/component=managementcluster 
    oc label certificates.cert-manager.io gateway-client-client backup.apiconnect.ibm.com/component=managementcluster
    oc label certificates.cert-manager.io portal-admin-client backup.apiconnect.ibm.com/component=managementcluster
    oc label certificates.cert-manager.io analytics-ingestion-client backup.apiconnect.ibm.com/component=managementcluster 
    oc label certificates.cert-manager.io api-manager-ca backup.apiconnect.ibm.com/component=managementcluster
    oc label secret api-manager-ca backup.apiconnect.ibm.com/component=managementcluster
    oc label secret portal-admin-client backup.apiconnect.ibm.com/component=managementcluster
    oc label secret gateway-client-client backup.apiconnect.ibm.com/component=managementcluster  
    
    oc label portalclusters --all backup.apiconnect.ibm.com/component=portalcluster
    oc label secret ingress-ca backup.apiconnect.ibm.com/component=portalcluster
    oc label issuers.cert-manager.io ingress-issuer backup.apiconnect.ibm.com/component=portalcluster                                                                            
    oc label issuers.cert-manager.io selfsigning-issuer backup.apiconnect.ibm.com/component=portalcluster
    
    oc label analyticsclusters --all backup.apiconnect.ibm.com/component=analyticscluster
    oc label issuers.cert-manager.io ingress-issuer backup.apiconnect.ibm.com/component=analyticscluster                                                                     
    oc label issuers.cert-manager.io selfsigning-issuer backup.apiconnect.ibm.com/component=analyticscluster
    oc label secret ingress-ca backup.apiconnect.ibm.com/component=analyticscluster 
    
    oc label gatewayclusters --all backup.apiconnect.ibm.com/component=gatewaycluster
    oc label issuers.cert-manager.io ingress-issuer backup.apiconnect.ibm.com/component=gatewaycluster
    oc label issuers.cert-manager.io selfsigning-issuer backup.apiconnect.ibm.com/component=gatewaycluster
    oc label secret gateway-service backup.apiconnect.ibm.com/component=gatewaycluster
    oc label secret gateway-peering backup.apiconnect.ibm.com/component=gatewaycluster
    oc label secret ingress-ca backup.apiconnect.ibm.com/component=gatewaycluster
    oc label secret admin-secret backup.apiconnect.ibm.com/component=gatewaycluster
    oc label secret api-manager-ca backup.apiconnect.ibm.com/component=gatewaycluster
    oc label certificates.cert-manager.io gateway-service backup.apiconnect.ibm.com/component=gatewaycluster 
    oc label certificates.cert-manager.io gateway-peering backup.apiconnect.ibm.com/component=gatewaycluster

    Queue managers need two labels to be set. For each Queue manager, run this patch command:

    oc patch queuemanager <queue manager name> --type merge --patch '{"metadata":{"labels":{"backup.mq.ibm.com/component":"queuemanager"}},"spec":{"labels":{"backup.mq.ibm.com/component":"nopodbackup"}}}'

    You need to label only the queue managers that you created. Do not label queue managers that are owned by Integration assemblies or Messaging servers; those queue managers are restored by their owning resource.

    You must also label any configmaps and secrets that are used to configure the queue managers:

    oc label configmap <configmap-name> backup.mq.ibm.com/component=configmap
    oc label secret <secret-name> backup.mq.ibm.com/component=secret
  6. Run the following command to label the pull secret for your entitlement key. This is the secret that you created if you installed Cloud Pak for Integration in an online environment. For more information, see Installing.

    oc label secret ibm-entitlement-key backup.integration.ibm.com/component=secret

    Run this command in each namespace where you have deployed instances that use the pull secret; a sketch that labels the secret in several namespaces follows.
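
    For example, a minimal sketch that labels the pull secret in several namespaces (the namespace names are placeholders):

    for ns in <namespace-1> <namespace-2>; do
      oc label secret ibm-entitlement-key backup.integration.ibm.com/component=secret -n "$ns"
    done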

  7. Run the following command for each namespace where you have instances, including the namespace where the foundational services workload is deployed.

    oc label namespace <your-namespace> backup.integration.ibm.com/component=namespace
  8. If you are backing up Declarative APIs or Declarative API Products, label any ConfigMaps and secrets that you create. These ConfigMaps and secrets are referenced when you use APIs and Products.

    oc label configmap <configmap-name> backup.integration.ibm.com/component=configmap
    oc label secret <secret-name> backup.integration.ibm.com/component=secret

    If you are backing up an API or Product that uses an Integration runtime, you must also back up and restore the Integration runtime.

  9. If you are backing up Enterprise gateway instances, label any secrets that those instances reference, such as secrets that contain user credentials:

    oc label secret <secret-name> backup.integration.ibm.com/component=secret

Create and configure the backup custom resource

OADP uses a custom resource of kind Backup to specify which resources are backed up and where the backups are stored.

  1. Copy the following YAML code to create a draft copy for modification:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: integration-backup
      namespace: openshift-adp
    spec:
      ttl: 720h0m0s
      defaultVolumesToRestic: false
      includeClusterResources: true
      includedNamespaces:
      - '*'
      orLabelSelectors:
      - matchExpressions:
        - key: backup.integration.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - assetrepository
          - platformnavigator
          - integrationassembly
          - messagingserver
          - messagingqueue
          - messagingchannel
          - messaginguser
          - commonservice
          - secret
          - configmap
          - namespace
      - matchExpressions:
        - key: backup.apiconnect.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - api
          - product
          - apiconnectcluster
          - portalcluster
          - analyticscluster
          - managementcluster
          - gatewaycluster
          - secret
          - configmap
      - matchExpressions:
        - key: backup.datapower.ibm.com/component
          operator: In
          values:
          - catalogsource 
          - operatorgroup
          - subscription
          - datapowerservice
          - secret
          - configmap
      - matchExpressions:
        - key: backup.appconnect.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - configuration
          - dashboard
          - designerauthoring
          - integrationruntime
          - integrationserver
          - switchserver
          - secret
          - configmap
      - matchExpressions:
        - key: backup.mq.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - queuemanager
          - secret
          - configmap
      - matchExpressions:
        - key: backup.eventstreams.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - eventstreams
          - kafkaconnect
          - kafkatopic
          - kafkauser
          - kafkabridge
          - kafkaconnector
          - kafkarebalance
          - secret
          - configmap
      - matchExpressions:
        - key: foundationservices.cloudpak.ibm.com
          operator: In
          values:
            - keycloak
  2. (Optional) Modify the matchExpressions sections as required. You can remove an operator from the backup by removing the matchExpressions section for it. For example, if you don't want to back up any instance that is managed by the IBM App Connect operator, remove the matchExpressions section that has a key value of backup.appconnect.ibm.com/component. You can also add or remove label values to back up components within an operator. For example, if you want to exclude operators because you back those up in a different way, remove the catalogsource, operatorgroup, and subscription label values. If you added your own labels with custom text, ensure that those labels are present.

    For example, a matchExpressions section for the IBM App Connect operator might look like this:

    - matchExpressions:
      - key: backup.appconnect.ibm.com/component
        operator: In
        values:
        - my-custom-label
        - catalogsource
        - operatorgroup
        - subscription
        - dashboard
        - designerauthoring
        - integrationruntime
        - integrationserver
        - secret
        - configmap

    Match expressions are useful if you installed multiple instances into a single namespace and you want to backup only some of them. For more information about matchExpressions, see Resources that support set-based requirements in the Kubernetes documentation.

  3. When your updates are complete, use your modified YAML code to create the Backup custom resource in your cluster. The backup process starts when you create the resource, and runs in the background, avoiding disruption to your system. You can create the resource by using the Red Hat OpenShift console or the CLI:

    Console

    1. Click the Plus (Import YAML) icon to open the YAML editor.

    2. Paste your modified YAML code into the editor.

    3. Click Create. The Backup custom resource is created in the namespace that is listed in the YAML code.

    CLI

    1. Save your modified YAML code in a text file.

    2. Log in to your cluster by running the oc login command.

    3. Set the OADP namespace (openshift-adp by default) as the default namespace by running the following command:

      oc project openshift-adp
    4. Apply the YAML code by running the following command, where <filename> is the name of the file that you saved:

      oc apply -f <filename>.yaml

    When the backup completes successfully, the status of your backup instance in the Backup tab of the OADP operator is Completed. You can also view the status by running the following command, where <integration-backup> is the name of your Backup custom resource:

    oc get backup.velero.io <integration-backup> -n openshift-adp -o jsonpath='{.status.phase}'

Test the backup while the primary system is running

Test your backup to ensure that you can successfully restore your system if a disaster occurs. For example, set up a new cluster, restore a backup into it, and check that the instances work as expected. Some instances might require further configuration to test while your primary cluster is running. The following list summarizes the actions to take for each instance; a short command-line sketch of some spot checks follows the list:

  • Platform UI: Check that you can access the user interface.

  • Integration assembly: Check that the IntegrationAssembly custom resource has a status of Ready.

  • Automation assets: Check that you can see the expected assets in the restored instance.

  • Enterprise gateway: Check that the DataPowerService custom resource has a status of Ready. OADP backs up and restores only the stateless components of the instance.

  • Kafka cluster: Because of the georeplication feature, you should be able to see information coming from the primary cluster.

  • Integration runtime: Use DNS router configuration to route data from the primary cluster to the test cluster, to imitate a failover situation and check that the restored backup in the new cluster works as expected.
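
The following is a minimal command-line sketch of some of these spot checks. The resource and namespace names are placeholders, and the commands only list and describe the restored custom resources; compare the reported status against the expectations in the list above:

oc get integrationassembly,datapowerservice --all-namespaces
oc describe integrationassembly <assembly-name> -n <namespace>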

Restoring a backup

Create and configure the restore custom resource

OADP uses a custom resource of kind Restore to specify which resources are restored and which backup they are restored from.

  1. Copy the following YAML code to create a draft copy for modification:

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: integration-restore
      namespace: openshift-adp
    spec:
      backupName: <integration-backup>
      includeClusterResources: true
      existingResourcePolicy: update
      restorePVs: true
      restoreStatus:
        includedResources:
        - apis.apiconnect.ibm.com
        - products.apiconnect.ibm.com
        - integrationkeycloakclients.keycloak.integration.ibm.com
        - integrationkeycloakusers.keycloak.integration.ibm.com
      hooks: {}
      includedNamespaces:
      - '*'
      itemOperationTimeout: 1h0m0s
      orLabelSelectors:
      - matchExpressions:
        - key: backup.integration.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - assetrepository
          - platformnavigator
          - integrationassembly
          - messagingserver
          - messagingqueue
          - messagingchannel
          - messaginguser
          - commonservice
          - secret
          - configmap
          - namespace
      - matchExpressions:
        - key: backup.apiconnect.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - api
          - product
          - apiconnectcluster
          - portalcluster
          - analyticscluster
          - managementcluster
          - secret
          - configmap
      - matchExpressions:
        - key: backup.datapower.ibm.com/component
          operator: In
          values:
          - catalogsource 
          - operatorgroup
          - subscription
          - datapowerservice
          - secret
          - configmap
      - matchExpressions:
        - key: backup.appconnect.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - configuration
          - dashboard
          - designerauthoring
          - integrationruntime
          - integrationserver
          - switchserver
          - secret
          - configmap
      - matchExpressions:
        - key: backup.mq.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - queuemanager
          - secret
          - configmap
      - matchExpressions:
        - key: backup.eventstreams.ibm.com/component
          operator: In
          values:
          - catalogsource
          - operatorgroup
          - subscription
          - eventstreams
          - kafkaconnect
          - kafkatopic
          - kafkauser
          - kafkabridge
          - kafkaconnector
          - kafkarebalance
          - secret
          - configmap
      - matchExpressions:
        - key: foundationservices.cloudpak.ibm.com
          operator: In
          values:
            - keycloak
  2. Replace <integration-backup> with the name of the backup to restore. If you are restoring into a new cluster that you prepared as described earlier, you can view a list of all the backups that were created in your old cluster, on the Backup tab of the OADP operator. You can also list the backups by running the oc get backups.velero.io -n openshift-adp command.

  3. (Optional) Edit the matchExpressions sections in the YAML code to add or remove resources that you want to restore, as you did for the Backup custom resource.
    Important: Some resources have dependencies on others, so even if you don't want to restore a resource, you might need to restore it so that dependent resources start successfully after the restore operation. For example, the Integration dashboard (dashboard) and Integration design (designerauthoring) instances require the IBM Cloud Pak for Integration operator (subscription under backup.integration.ibm.com/component).
  4. Use your updated YAML code to create the Restore custom resource in your cluster, in the same way that you created the Backup custom resource. The restore process starts when you create the resource. When the restore completes successfully, the status of your restore instance in the Restore tab of the OADP operator is completed. You can also view the status by running the following command:

    oc get restore.velero.io -n openshift-adp <integration-restore> -o jsonpath='{.status.phase}'
    Tip: Some objects, such as pods, might remain in an error state until the associated operator recreates all the related resources (such as configmaps or secrets).

Check the restored backup

Verify that the expected Cloud Pak for Integration resources are restored and functioning correctly.

More information about labels

You might want to add your own custom labels to the instances in Cloud Pak for Integration, or back up only some instances. For either scenario, you need to know more about Cloud Pak for Integration backup labels.

A backup label has the following format, where <api_string> is a string that represents the operator that provides the instance and <label_value> is the label value to apply to that instance:

backup.<api_string>.ibm.com/component=<label_value>

The following table shows the valid API strings:

Operator name API string
IBM Cloud Pak for Integration integration
IBM Automation Foundation assets integration
IBM API Connect apiconnect
IBM App Connect appconnect
IBM MQ mq
IBM DataPower Gateway datapower
IBM Event Streams eventstreams

The instances in Cloud Pak for Integration use the following label values by default. However, your instances might not already have these labels (for example, if you are upgrading from an earlier release where backup labels were not present), so you must add them. You can also add custom label values by using your own text.

All operators in Cloud Pak for Integration have the following standard labels:

Operator resource Label value
Catalog source catalogsource
Operator group operatorgroup
Subscription subscription

The following table shows the default label strings for each instance type:

Instance type Label value
Platform UI platformnavigator
Integration assembly integrationassembly
Automation assets assetrepository
Declarative API api
Declarative API Product product
Messaging server messagingserver
Messaging queue messagingqueue
Messaging channel messagingchannel
Messaging user messaginguser
Integration dashboard dashboard
Integration design designerauthoring
Integration runtime integrationruntime
Integration server integrationserver
Enterprise gateway datapowerservice
Kafka cluster eventstreams
Kafka connect kafkaconnect
Kafka topic kafkatopic
Kafka user kafkauser
Kafka bridge kafkabridge
Kafka connector kafkaconnector
Kafka rebalance kafkarebalance
Foundational services commonservice

If you want to back up a specific instance, you can add a custom label to it. For example, the following label assigns a custom label value of my-dashboard to an instance that is provided by the IBM App Connect operator:

backup.appconnect.ibm.com/component=my-dashboard

In addition to adding labels by using the oc label command, as described earlier, you can add labels to an instance by using the Platform UI or by modifying the instance's YAML code. For more information about adding labels in the Platform UI, see Using the Platform UI.
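
For example, a minimal sketch that adds the custom label value shown above to an App Connect Integration dashboard instance by patching its YAML (the instance name and namespace are placeholders):

oc patch dashboard <dashboard-name> -n <namespace> --type merge --patch '{"metadata":{"labels":{"backup.appconnect.ibm.com/component":"my-dashboard"}}}'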

For more information about labels, including allowed characters and length, see Labels and selectors in the Kubernetes documentation.

Troubleshooting

If the restore process fails because of existing objects in the restore location, you might achieve a successful restore by deleting the existing objects and then running the restore process again. For example, if you attempt to restore a namespace that already exists, an error occurs because the restore process cannot add a Velero label to the namespace.
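
For example, after you remove or correct the conflicting objects, you can delete the failed Restore resource and create it again from your saved YAML file (the resource and file names here are illustrative):

oc delete restore.velero.io integration-restore -n openshift-adp
oc apply -f integration-restore.yaml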

You can view backup details and restore logs by installing the Velero CLI and running the following commands:

velero backup describe <integration-backup> -n openshift-adp
velero restore logs <integration-restore> -n openshift-adp

For more information about troubleshooting, see Troubleshooting in the OADP section of the Red Hat OpenShift documentation.