Migrating profiling results after upgrading
In Cloud Pak for Data 5.0.3 and later, profiling results are stored in a PostgreSQL database instead of the asset-files storage. To make existing profiling results available after upgrading from one of the following releases, migrate the results:

- Cloud Pak for Data 4.8
- Cloud Pak for Data 5.0.0, 5.0.1, or 5.0.2
To migrate profiling results:
- Open a terminal.
- Log in to Red Hat OpenShift Container Platform as a user with sufficient permissions to complete the task.

  ```
  ${OC_LOGIN}
  ```

  Remember: `OC_LOGIN` is an alias for the `oc login` command. For more information, see Setting up environment variables.
- In the `/tmp` directory, create an `override.yaml` file with the following content, replacing the values in angle brackets (`<>`) as appropriate.

  ```
  namespace: <replace_with_cpd_instance_namespace>
  blockStorageClass: <replace_with_block_storage_class_value>
  fileStorageClass: <replace_with_file_storage_class_value>
  docker_registry_prefix: <replace_with_registry_value>
  use_dynamic_provisioning: true
  ansible_python_interpreter: /usr/bin/python3
  allow_reconcile: true
  wdp_profiling_postgres_action: <migration action>
  ```

  Replace `<migration action>` with one of these values:

  - `'MIGRATE'`: Profiling results are copied from the asset-files storage to a PostgreSQL database. The results are not removed from the asset-files storage. However, every copied result is renamed in the asset-files storage to indicate that it was already copied, so it is not copied again. `'MIGRATE'` is the default value.
  - `'REVERT'`: Revert the renaming of copied profiling results. Use this option if you need to rerun the migration because the process failed or stopped before it was complete. By reverting the name changes, you ensure that all profiling results are picked up in the next migration run.
  - `'CLEAN'`: Delete all results from the asset-files storage. Important: The data is permanently deleted and can't be restored. Therefore, use this option only after all results are copied successfully and you no longer need the results in the asset-files storage.
  - `'BOTH'`: This option combines the migration actions `'MIGRATE'` and `'CLEAN'`. After a profiling result is copied to the PostgreSQL database, it is immediately deleted from the asset-files storage.

  For the remaining values to be replaced, refer to the information that you provided in the installation environment variables files. For more information, see Setting up environment variables.
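For illustration, the file can be written with a shell heredoc. All values below are hypothetical placeholders, not defaults; substitute the values from your installation environment variables files.

```shell
# Sketch only: write /tmp/override.yaml with sample placeholder values.
# Replace every value with the ones that match your cluster.
cat > /tmp/override.yaml <<'EOF'
namespace: cpd-instance
blockStorageClass: ocs-storagecluster-ceph-rbd
fileStorageClass: ocs-storagecluster-cephfs
docker_registry_prefix: cp.icr.io/cp/cpd
use_dynamic_provisioning: true
ansible_python_interpreter: /usr/bin/python3
allow_reconcile: true
wdp_profiling_postgres_action: 'MIGRATE'
EOF
```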
- Change to the IBM Knowledge Catalog operator project:

  ```
  oc project ${PROJECT_CPD_INST_OPERATORS}
  ```
- Get the name of the `wkc-operator` pod:

  ```
  oc get po | grep wkc
  ```
- Copy the `override.yaml` file to the `/tmp` directory in the `wkc-operator` pod. Replace `<ibm-cpd-wkc-operator-xxxx>` with the name that was returned in the previous step.

  ```
  oc cp /tmp/override.yaml <ibm-cpd-wkc-operator-xxxx>:/tmp/
  ```
- Connect to the `wkc-operator` pod. Replace `<ibm-cpd-wkc-operator-xxxx>` with the name that was returned in the previous step.

  ```
  oc exec -it <ibm-cpd-wkc-operator-xxxx> bash
  ```
- Verify that the `override.yaml` file is available in the `/tmp` directory in the `wkc-operator` pod.
- Start the migration. Replace `<cpd-version>` with the product version to which you upgraded, for example, `5.2.0`.

  ```
  ansible-playbook /opt/ansible/<cpd-version>/roles/wkc-core/wdp_profiling_postgres_migration.yaml --extra-vars=@/tmp/override.yaml -vvvv
  ```
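As a sketch of the version substitution in the playbook path, a small shell helper (hypothetical, not part of the product) can assemble it from the version string:

```shell
# Hypothetical helper: build the migration playbook path for a given
# product version, mirroring the path used in the command above.
playbook_path() {
  echo "/opt/ansible/$1/roles/wkc-core/wdp_profiling_postgres_migration.yaml"
}

playbook_path 5.2.0
# prints /opt/ansible/5.2.0/roles/wkc-core/wdp_profiling_postgres_migration.yaml
```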
- Monitor the status of the migration job:

  - Open a separate terminal.
  - Monitor the pod status:

    ```
    oc get pod | grep wdp-profiling-postgres-migration
    ```

    The pod should be in status `Running` while the job is active and change to status `Completed` after the job completes. You can check the log of the pod for the job progress. Replace `<wdp-profiling-postgres-migration-xxxxxx>` with the correct pod name.

    ```
    oc logs -f <wdp-profiling-postgres-migration-xxxxxx> -n ${PROJECT_CPD_INST_OPERANDS}
    ```

  - To verify successful migration, check the job status:

    ```
    oc get job | grep migrat
    ```

    The job status must be `Complete`. If the status is `Error`, check the pod logs as described in the previous step.
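If you want to script the final check, one approach (a sketch only; it assumes the default `oc get job` column layout, where the second column is COMPLETIONS) is to test for `1/1` completions:

```shell
# Hypothetical check: succeed only when a job line shows 1/1 completions.
# $1 is one line of `oc get job` output, for example:
#   wdp-profiling-postgres-migration   1/1   2m10s   5m
job_complete() {
  completions=$(printf '%s\n' "$1" | awk '{print $2}')
  [ "$completions" = "1/1" ]
}
```

Usage: `job_complete "$(oc get job | grep migrat)" && echo "migration finished"`.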
Parent topic: Administering IBM Knowledge Catalog