Completing the catalog-api service migration

After you upgrade the common core services to IBM® Software Hub Version 5.2, the back-end database for the catalog-api service is migrated from CouchDB to PostgreSQL.

Who needs to complete this task?

An instance administrator can complete this task.

When do you need to complete this task?
Complete this task only if the following statements are true:
  • You upgraded from one of the following releases:
    • IBM Cloud Pak® for Data Version 4.8
    • IBM Cloud Pak for Data Version 5.0
    • IBM Software Hub Version 5.1
  • You upgraded an instance that includes the common core services.

Repeat as needed: If you have multiple instances of IBM Software Hub on the cluster, repeat this task for each instance that you upgrade to Version 5.2.
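
Tip: The commands in this task use the ${PROJECT_CPD_INST_OPERANDS} environment variable, which must be set to the project (namespace) where the IBM Software Hub instance is installed. If the variable is not already set in your session, export it first, replacing the placeholder with the name of your project:

export PROJECT_CPD_INST_OPERANDS=<project-name>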

1. Checking the migration method used

If you ran an automatic migration, the common core services operator waits for the migration jobs to complete before it upgrades the components that are associated with the common core services.

If you ran a semi-automatic migration, the common core services operator runs the migration jobs while it upgrades the components that are associated with the common core services.

  1. Run the following command to determine which migration method was used:
    oc describe ccs ccs-cr \
    --namespace ${PROJECT_CPD_INST_OPERANDS} \
    | grep use_semi_auto_catalog_api_migration

    Take the appropriate action based on the response returned by the oc describe command:

      • The command returns an empty response: The automatic migration was used. Proceed to 4. Collecting statistics about the migration.
      • The command returns true: The semi-automatic migration was used. Proceed to 2. Checking the status of the migration jobs.
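
    If you prefer to read the flag directly instead of filtering the describe output, you can query the custom resource with a JSON path. The following sketch assumes that the use_semi_auto_catalog_api_migration flag is stored in the spec section of the ccs-cr custom resource (as the continue_semi_auto_catalog_api_migration setting in 3. Completing the migration suggests); adjust the path if the flag is stored elsewhere in your environment:

    oc get ccs ccs-cr \
    --namespace ${PROJECT_CPD_INST_OPERANDS} \
    -o jsonpath='{.spec.use_semi_auto_catalog_api_migration}'

    Empty output indicates an automatic migration; true indicates a semi-automatic migration.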

2. Checking the status of the migration jobs

Remember: This step is required only for semi-automatic migrations. If you completed an automatic migration, proceed to 4. Collecting statistics about the migration.
Check the migration status periodically. The following jobs might take some time to complete, depending on the number of assets to be migrated:
  • cams-postgres-migration-job
  • jobs-postgres-upgrade-migration

To check the status of the jobs:

oc get job cams-postgres-migration-job jobs-postgres-upgrade-migration \
--namespace ${PROJECT_CPD_INST_OPERANDS} \
-o custom-columns=NAME:.metadata.name,STATUS:.status.conditions[0].type,COMPLETIONS:.status.succeeded

The command returns output with the following format:

NAME                             STATUS    COMPLETIONS
cams-postgres-migration-job      Complete   1/1       
jobs-postgres-upgrade-migration  Complete   1/1        

Take the appropriate action based on the status of the jobs:

  • The status of either job is Failed: Contact IBM Support for assistance resolving the error.
  • The status of either job is InProgress: Wait several minutes before checking the status of the jobs again.
  • The status of both jobs is Complete: Proceed to 3. Completing the migration.
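
Tip: Instead of checking the job status repeatedly, you can block until both jobs report the Complete condition with oc wait. The following is a minimal sketch; the timeout value is an example, so adjust it for the size of your migration, and note that if a job fails the command simply waits until the timeout expires, so still check the job status if it does not return:

oc wait --for=condition=complete \
job/cams-postgres-migration-job \
job/jobs-postgres-upgrade-migration \
--namespace ${PROJECT_CPD_INST_OPERANDS} \
--timeout=4h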

3. Completing the migration

Remember: This step is required only for semi-automatic migrations. If you completed an automatic migration, proceed to 4. Collecting statistics about the migration.
After both of the following jobs complete, you can complete the migration to PostgreSQL:
  • cams-postgres-migration-job
  • jobs-postgres-upgrade-migration
Important: It is strongly recommended that you minimize the number of updates to the database before you complete this step. Large or numerous write operations that occur during the migration will increase the time that the migration takes.

To complete the migration to PostgreSQL:

  1. Run the following command to continue the semi-automatic migration:
    oc patch ccs ccs-cr \
    --namespace ${PROJECT_CPD_INST_OPERANDS} \
    --type merge \
    --patch '{"spec": {"continue_semi_auto_catalog_api_migration": true}}'
  2. Wait for the common core services custom resource to reach the Completed status. This process takes at least 10 minutes. However, it might take significantly longer if any assets were changed during the common core services upgrade.

    To check the status of the custom resource, run:

    oc get ccs ccs-cr \
    --namespace ${PROJECT_CPD_INST_OPERANDS}

    The command returns output with the following format:

    NAME     VERSION   RECONCILED   STATUS      PERCENT   AGE
    ccs-cr   11.0.0    11.0.0       Completed   100%      1d

    Take the appropriate action based on the status of the custom resource:

      • The status of the custom resource is Failed: Contact IBM Support for assistance resolving the error.
      • The status of the custom resource is InProgress: Wait several minutes before checking the status of the custom resource again.
      • The status of the custom resource is Completed: Proceed to 4. Collecting statistics about the migration.
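
    Tip: Instead of rerunning the oc get ccs command manually while you wait, you can poll the status from a small script. The following sketch reads the STATUS column (the fourth column in the sample output shown above) every five minutes until it is no longer InProgress; the column position and the polling interval are assumptions, so adjust them for your environment:

    while true; do
      status=$(oc get ccs ccs-cr --namespace ${PROJECT_CPD_INST_OPERANDS} --no-headers | awk '{print $4}')
      echo "$(date) ccs-cr status: ${status}"
      if [ "${status}" != "InProgress" ]; then
        break
      fi
      sleep 300
    done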

4. Collecting statistics about the migration

  1. Save the following script on the client workstation as a file named migration_status.sh:
    #!/bin/bash
    
    # Set postgres connection parameters
    postgres_password=$(oc get secret -n ${PROJECT_CPD_INST_OPERANDS} ccs-cams-postgres-app -o json 2>/dev/null | jq -r '.data."password"' | base64 -d)
    postgres_username=cams_user
    postgres_db=camsdb
    postgres_migrationdb=camsdb_migration
    
    echo -e "======MIGRATION STATUS==========="
    
    # Total migrated database(s)
    databases=$(oc -n ${PROJECT_CPD_INST_OPERANDS} -c postgres exec ccs-cams-postgres-1 -- psql -t postgresql://$postgres_username:$postgres_password@localhost:5432/$postgres_migrationdb -c "select count(*) from migration.status where state='complete'" 2>/dev/null)
    if [ -n "$databases" ];then
      databases_no_space=$(echo "$databases" | tr -d ' ')
      echo "Total catalog-api databases migrated: $databases_no_space"
    else
      echo "Unable to fetch migration information for databases"
    fi
    
    # Total migrated assets
    assets=$(oc -n ${PROJECT_CPD_INST_OPERANDS} -c postgres exec ccs-cams-postgres-1 -- psql -t postgresql://$postgres_username:$postgres_password@localhost:5432/$postgres_db -c "select count(*) from cams.asset" 2>/dev/null)
    if [ -n "$assets" ];then
      assets_no_space=$(echo "$assets" | tr -d ' ')
      echo -e "Total catalog-api assets migrated: $assets_no_space\n"
    else
      echo "Unable to fetch migration information for assets"
    fi
  2. Run the migration_status.sh script (a sample of the expected output is shown after this list):
    ./migration_status.sh
  3. Proceed to 5. Backing up the PostgreSQL database.
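
The migration_status.sh script prints output in the following format. The format is derived from the echo statements in the script; the counts shown here are placeholders:

======MIGRATION STATUS===========
Total catalog-api databases migrated: <number-of-databases>
Total catalog-api assets migrated: <number-of-assets>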

5. Backing up the PostgreSQL database

Back up the new PostgreSQL database:

  1. Save the following script on the client workstation as a file named backup_postgres.sh:
    #!/bin/bash
    
    # Make sure PROJECT_CPD_INST_OPERANDS is set
    if [ -z "$PROJECT_CPD_INST_OPERANDS" ]; then
      echo "Environment variable PROJECT_CPD_INST_OPERANDS is not defined. This environment variable must be set to the project where IBM Software Hub is running."
      exit 1
    fi
    
    echo "PROJECT_CPD_INST_OPERANDS namespace is: $PROJECT_CPD_INST_OPERANDS"
    
    # Step 1: Find the replica pod
    REPLICA_POD=$(oc get pods -n $PROJECT_CPD_INST_OPERANDS -l app=ccs-cams-postgres -o jsonpath='{range .items[?(@.metadata.labels.role=="replica")]}{.metadata.name}{"\n"}{end}')
    
    if [ -z "$REPLICA_POD" ]; then
      echo "No replica pod found."
      exit 1
    fi
    
    echo "Replica pod: $REPLICA_POD"
    
    # Step 2: Extract JDBC URI from a secret
    JDBC_URI=$(oc get secret ccs-cams-postgres-app -n $PROJECT_CPD_INST_OPERANDS -o jsonpath="{.data.uri}" | base64 -d)
    
    if [ -z "$JDBC_URI" ]; then
      echo "JDBC URI not found in secret."
      exit 1
    fi
    
    #  Set path on the pod to save the dump file 
    TARGET_PATH="/var/lib/postgresql/data/forpgdump"
    
    # Step 3: Run pg_dump with nohup inside the pod
    oc exec "$REPLICA_POD" -n $PROJECT_CPD_INST_OPERANDS -- bash -c "
      TARGET_PATH=\"$TARGET_PATH\"
      JDBC_URI=\"$JDBC_URI\"
      echo \"TARGET_PATH is $TARGET_PATH\"
      mkdir -p $TARGET_PATH &&
      chmod 777 $TARGET_PATH &&
      nohup bash -c '
        pg_dump $JDBC_URI -Fc -f $TARGET_PATH/cams_backup.dump > $TARGET_PATH/pgdump.log 2>&1 &&
        echo \"Backup succeeded. Please copy $TARGET_PATH/cams_backup.dump file from this pod to a safe place and delete it on this pod to save space.\" >> $TARGET_PATH/pgdump.log
      ' &
      echo \"pg_dump started in background. Logs: $TARGET_PATH/pgdump.log\"
    "
  2. Run the backup_postgres.sh script:
    ./backup_postgres.sh

    The script starts the backup as a background process in the replica pod.

  3. Set the REPLICA_POD environment variable:
    REPLICA_POD=$(oc get pods -n ${PROJECT_CPD_INST_OPERANDS} -l app=ccs-cams-postgres -o jsonpath='{range .items[?(@.metadata.labels.role=="replica")]}{.metadata.name}{"\n"}{end}')
  4. Open a remote shell in the replica pod:
    oc rsh ${REPLICA_POD}
  5. Change to the /var/lib/postgresql/data/forpgdump/ directory:
    cd /var/lib/postgresql/data/forpgdump/
  6. Run the following command to monitor the list of files in the directory:
    ls -lat
  7. Wait for the backup to complete. (This process can take several hours if the database is large.)
    • In progress: While the backup runs, the size of the pgdump.log file increases.
    • Complete: The backup is complete when the script writes the following message to the pgdump.log file:
      Backup succeeded. Please copy /var/lib/postgresql/data/forpgdump/cams_backup.dump 
      file from this pod to a safe place and delete it on this pod to save space.
    • Failed: If the backup fails, the pgdump.log file includes error messages. Contact IBM Support and append the pgdump.log file to your support case.

    Do not proceed to the next step unless the backup is complete.

  8. Set the POSTGRES_BACKUP_STORAGE_LOCATION environment variable to the location where you want to store the backup:
    export POSTGRES_BACKUP_STORAGE_LOCATION=<directory>
    Important: Ensure that you choose a location where the file will not be accidentally deleted.
  9. Copy the backup to the POSTGRES_BACKUP_STORAGE_LOCATION:
    oc cp ${REPLICA_POD}:/var/lib/postgresql/data/forpgdump/cams_backup.dump \
    $POSTGRES_BACKUP_STORAGE_LOCATION/cams_backup.dump
  10. Delete the backup from the replica pod:
    oc rsh $REPLICA_POD rm -f /var/lib/postgresql/data/forpgdump/cams_backup.dump
  11. Proceed to 6. Consolidating the PostgreSQL database.
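
Tip: Before you continue, you can optionally confirm that the copied dump file is readable. If the pg_restore utility is installed on the client workstation, listing the archive contents is a quick check. This verification is optional and is not part of the migration itself:

pg_restore --list $POSTGRES_BACKUP_STORAGE_LOCATION/cams_backup.dump | head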

6. Consolidating the PostgreSQL database

After you back up the new PostgreSQL database, you must consolidate all of the existing copies of identical data across governed catalogs into a single record so that all identical data assets share a set of common properties.

  1. Set the INSTANCE_URL environment variable to the URL of IBM Software Hub:
    export INSTANCE_URL=https://<URL>
    Tip: To get the URL of the web client, run the following command:
    cpd-cli manage get-cpd-instance-details \
    --cpd_instance_ns=${PROJECT_CPD_INST_OPERANDS}
  2. Get the name of a catalog-api-jobs pod:
    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} \
    | grep catalog-api-jobs
  3. Set the CAT_API_JOBS_POD environment variable to the name of a pod returned by the preceding command:
    export CAT_API_JOBS_POD=<pod-name>
  4. Open a Bash prompt in the pod:
    oc exec ${CAT_API_JOBS_POD} -n ${PROJECT_CPD_INST_OPERANDS} -it -- bash 
  5. Run the following command to set the AUTH_TOKEN environment variable:
    AUTH_TOKEN=$(cat /etc/.secrets/wkc/service_id_credential)
  6. Start the consolidation:
    curl -k -X PUT "${INSTANCE_URL}/v2/shared_assets/initialize_content?bss_account_id=999" \
         -H "Authorization: Basic $AUTH_TOKEN"

    The command returns a transaction ID.

    Important: Save the transaction ID so that you can refer to it later for debugging, if needed.
  7. Get the name of a catalog-api pod:
    oc get pods \
    -n ${PROJECT_CPD_INST_OPERANDS} \
    | grep catalog-api \
    | grep -v catalog-api-jobs
  8. Set the CAT_API_POD environment variable to the name of a pod returned by the preceding command:
    export CAT_API_POD=<pod-name>
  9. Check the catalog-api pod logs to determine the status of the consolidation:
    1. Check for the following success message:
      oc logs ${CAT_API_POD} \
      -n ${PROJECT_CPD_INST_OPERANDS} \
      | grep "Initial consolidation with bss account 999 complete"
      • If the command returns a response, the consolidation completed successfully. See What to do if the consolidation completed successfully.
      • If the command returns an empty response, proceed to the next step.
    2. Check for the following failure message:
      oc logs ${CAT_API_POD} \
      -n ${PROJECT_CPD_INST_OPERANDS} \
      | grep "Error running initial consolidation with resource key"
      • If the command returns a response, the consolidation failed. Try to consolidate the database again. If the problem persists, contact IBM Support.
      • If the command returns an empty response, proceed to the next step.
    3. Check for the following failure message:
      oc logs ${CAT_API_POD} \
      -n ${PROJECT_CPD_INST_OPERANDS} \
      | grep "Error executing initial consolidation for bss 999"
      • If the command returns a response, the consolidation failed. Try to consolidate the database again. If the problem persists, contact IBM Support.
      • If the command returns an empty response, proceed to the next step.
    4. If the preceding commands returned empty responses, wait 10 minutes before checking the pod logs again.
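
    If you expect the consolidation to run for a while, you can script the periodic check instead of rerunning the grep commands by hand. The following sketch rechecks the catalog-api pod logs every 10 minutes and stops when it finds the success message or one of the failure messages listed in the preceding steps:

    while true; do
      logs=$(oc logs ${CAT_API_POD} -n ${PROJECT_CPD_INST_OPERANDS})
      if echo "$logs" | grep -q "Initial consolidation with bss account 999 complete"; then
        echo "Consolidation completed successfully."
        break
      fi
      if echo "$logs" | grep -Eq "Error running initial consolidation with resource key|Error executing initial consolidation for bss 999"; then
        echo "Consolidation failed. Try to consolidate the database again or contact IBM Support."
        break
      fi
      echo "$(date) No consolidation messages found yet. Checking again in 10 minutes."
      sleep 600
    done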

What to do if the consolidation completed successfully

If the PostgreSQL database consolidation was successful, wait several weeks to confirm that the projects, catalogs, and spaces in your environment are working as expected.

After you confirm that the projects, catalogs, and spaces are working as expected, run the following commands to clean up the migration resources:

  1. Delete the pods associated with the migration:
    oc delete pod \
    -n ${PROJECT_CPD_INST_OPERANDS} \
    -l app=cams-postgres-migration-app
  2. Delete the jobs associated with the migration:
    oc delete job \
    -n ${PROJECT_CPD_INST_OPERANDS} \
    -l app=cams-postgres-migration-app
  3. Delete the config maps associated with the migration:
    oc delete cm \
    -n ${PROJECT_CPD_INST_OPERANDS} \
    -l app=cams-postgres-migration-app
  4. Delete the secrets associated with the migration:
    oc delete secret \
    -n ${PROJECT_CPD_INST_OPERANDS} \
    -l app=cams-postgres-migration-app
  5. Delete the persistent volume claim associated with the migration:
    oc delete pvc cams-postgres-migration-pvc \
    -n ${PROJECT_CPD_INST_OPERANDS}
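
After you delete these resources, you can optionally confirm that nothing associated with the migration remains. The resource types in this check match the ones deleted in the preceding steps:

    oc get pod,job,cm,secret \
    -n ${PROJECT_CPD_INST_OPERANDS} \
    -l app=cams-postgres-migration-app

    oc get pvc cams-postgres-migration-pvc \
    -n ${PROJECT_CPD_INST_OPERANDS}

The first command should report that no resources were found; the second should report that the persistent volume claim is not found.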