Completing setup tasks
Upgrade to IBM Software Hub Version 5.1 before IBM Cloud Pak for Data Version 4.8 reaches end of support. For more information, see Upgrading from IBM Cloud Pak for Data Version 4.8 to IBM Software Hub Version 5.1.
To make sure that all prerequisites for migration are met, complete a set of setup tasks in your Cloud Pak for Data Version 4.6 deployment.
Setup tasks to complete
- Setting environment variables
- Installing the latest version of the cpd-cli command-line interface
- Setting up the export-import utility
- Creating platform connections for rule output
- Granting access to all data quality projects
- Preparing user groups for catalog access
- Checking the migration status of quick scan jobs
- Checking whether the Identity Management Service is enabled
- Rebuilding the Solr index
- Increasing the expiry time for tokens
- Increasing the ephemeral storage for the wkc-data-rules pod
- Improving export performance
- Increasing the LTPA token timeout parameter
- Disabling global call logs
- Disabling service calls that are not related to migration
- Creating a db2dsdriver.cfg configuration file for migrating Db2 connections
Setting environment variables
Complete the following steps:
- Log in to Cloud Pak for Data as a user with the Red Hat® OpenShift® Container Platform admin role.
- Open a bash shell:
```
bash
```
- Set the following environment variables:
```
NAMESPACE=<namespace in CP4D>
PROFILE_NAME=<cpd-cli profile name>
CPU_ARCH=<x86_64 or ppc64le depending on the hardware on your Red Hat OpenShift Container Platform cluster>
CP4D_HOST=<cp4d host>
```
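For illustration, a filled-in version might look like the following. All four values are hypothetical; replace them with the details of your own cluster:
```
# Hypothetical example values for a typical deployment
NAMESPACE=cpd-instance
PROFILE_NAME=cpd-profile
CPU_ARCH=x86_64
CP4D_HOST=cpd-cpd-instance.apps.example.com
```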
Installing the latest version of the cpd-cli command-line interface
Ensure that you have the export and import utility set up. The latest version of the cpd-cli command-line interface (CLI) and related modules must be installed. For more information, see Installing the Cloud Pak for Data command-line interface (cpd-cli).
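To confirm which version is installed, you can run a quick check (a minimal sketch, assuming cpd-cli is on your PATH):
```
cpd-cli version
```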
Setting up the export-import utility
Run the following command to list the auxiliary modules for the export-import utility:
```
cpd-cli export-import list aux-modules --namespace=${NAMESPACE} --profile=${PROFILE_NAME} --arch=${CPU_ARCH}
```
Creating platform connections for rule output
To create platform connections for storing rule output tables, complete the steps described in Setting up platform connections.
Granting access to all data quality projects
To ensure that the administrator running the migration can access the data quality projects to be migrated, a system administrator must grant this user access to all data quality projects.
- Access the iis-services pod by using the following command:
```
oc exec -it $(oc get pods -o custom-columns=":metadata.name" -l app=iis-services) bash
```
- Run the following command:
```
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ia.server.accessAllProjects -value true
```
- Exit from the pod:
```
exit
```
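To confirm that the property took effect, you can display it before you exit the pod. This is a sketch only; it assumes that the iisAdmin.sh utility in your version supports the -display option:
```
# Display the property that was just set (option support can vary by version)
/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -display -key com.ibm.iis.ia.server.accessAllProjects
```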
Preparing user groups for catalog access
Usually, information is already synced to the default catalog, which makes the default catalog the most likely target catalog for the migration. In this case, all users who had access to the information in the legacy components should also have access to the migrated data.
At any time, you can set up user groups to easily grant additional users access to the catalog. Create a user group and add that user group to the target catalog with the Editor role. Any user who is later added to that group gets editor access to the catalog. You can also use an existing user group instead of creating a new group. For more information, see Managing user groups.
The default platform administrator (the admin or cpadmin user) must run the migration commands and should be added to the target catalog as a member with the Admin role.
Checking the migration status of quick scan jobs
Make sure that all quick scan jobs and results that you still need in the upgraded environment are already migrated. If any jobs are not yet migrated, evaluate whether you want to keep that information. For any quick scan jobs that you want to preserve, complete the steps in Migrating quick scan jobs.
Checking whether the Identity Management Service is enabled
Run the following command:
```
oc get zenservice lite-cr -n ${NAMESPACE} -o jsonpath='{.spec.iamIntegration}'
```
If the command returns true, the Identity Management Service is enabled. In this case, make sure to follow the instructions for an Identity Management Service enabled system.
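As a convenience, the check can be wrapped in a small conditional. This is a sketch only, and the echo messages are illustrative:
```
IAM_ENABLED=$(oc get zenservice lite-cr -n ${NAMESPACE} -o jsonpath='{.spec.iamIntegration}')
if [ "${IAM_ENABLED}" = "true" ]; then
  echo "Identity Management Service is enabled: authenticate with an API key."
else
  echo "Identity Management Service is not enabled: authenticate with the administrator password."
fi
```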
Rebuilding the Solr index
Rebuild the Solr index by calling the following REST API:
```
https://${CP4D_HOST}/ibm/iis/dq/da/rest/v1/reindex?batchSize=25&solrBatchSize=100&upgrade=false&force=true
```
- username: The username of the default platform administrator: cpadmin or admin.
- password: If the Identity Management Service is enabled, provide the API key. Otherwise, provide the administrator password.

To run the call from within the cluster instead, complete the following steps:
- Log in to the is-en-conductor-0 pod as a user with administrator rights:
```
oc exec -it is-en-conductor-0 bash
```
- Run the following command:
```
curl -k -X GET "https://is-servicesdocker:9446/ibm/iis/dq/da/rest/v1/reindex?batchSize=25&solrBatchSize=100&upgrade=false&force=true" -H 'Content-Type: application/json' -u isadmin:$ISADMIN_PASSWORD
```
- Exit from the pod:
```
exit
```
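If you prefer not to open an interactive shell, the same request can be issued in one line. A sketch, assuming ISADMIN_PASSWORD is set in the shell where you run oc:
```
# One-shot variant: run curl inside the pod without an interactive session
oc exec is-en-conductor-0 -- curl -k -X GET \
  "https://is-servicesdocker:9446/ibm/iis/dq/da/rest/v1/reindex?batchSize=25&solrBatchSize=100&upgrade=false&force=true" \
  -H 'Content-Type: application/json' -u isadmin:${ISADMIN_PASSWORD}
```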
Increasing the expiry time for tokens
- Check whether a token expiry time is set by running the following command:
```
oc get configmap product-configmap --namespace=${NAMESPACE} -o custom-columns=test:.data.TOKEN_EXPIRY_TIME | sed '1d'
```
If the parameter TOKEN_EXPIRY_TIME is set, note the original setting so that you can reset the value after the migration is complete.
- Change the TOKEN_EXPIRY_TIME setting to a large value, such as 48 hours, by running the following command:
```
oc patch configmap product-configmap --namespace=${NAMESPACE} --type=merge --patch="{\"data\": {\"TOKEN_EXPIRY_TIME\": \"48\"}}"
```
- Check whether a token refresh period is set by running the following command:
```
oc get configmap product-configmap --namespace=${NAMESPACE} -o custom-columns=test:.data.TOKEN_REFRESH_PERIOD | sed '1d'
```
If the parameter TOKEN_REFRESH_PERIOD is set, note the original setting so that you can reset the value after the migration is complete.
- Change the TOKEN_REFRESH_PERIOD setting to a large value, such as 48 hours, by running the following command:
```
oc patch configmap product-configmap --namespace=${NAMESPACE} --type=merge --patch="{\"data\": {\"TOKEN_REFRESH_PERIOD\": \"48\"}}"
```
- Restart the usermgmt pods by running the following command:
```
oc delete pods --namespace=${NAMESPACE} -l component=usermgmt
```
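One convenient way to record the original settings before patching is to capture them in shell variables. A minimal sketch:
```
# Capture the current values so they can be restored after the migration
ORIG_TOKEN_EXPIRY=$(oc get configmap product-configmap --namespace=${NAMESPACE} -o jsonpath='{.data.TOKEN_EXPIRY_TIME}')
ORIG_TOKEN_REFRESH=$(oc get configmap product-configmap --namespace=${NAMESPACE} -o jsonpath='{.data.TOKEN_REFRESH_PERIOD}')
echo "TOKEN_EXPIRY_TIME=${ORIG_TOKEN_EXPIRY:-<not set>}"
echo "TOKEN_REFRESH_PERIOD=${ORIG_TOKEN_REFRESH:-<not set>}"
```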
Increasing the ephemeral storage for the wkc-data-rules pod
To avoid storage issues in the wkc-data-rules pod when migrating data quality rules, increase the pod's ephemeral storage. You can revert this setting after migration or continue working with the increased storage settings.
- Log in to the Red Hat OpenShift Container Platform as a user with administrator rights.
- Check whether a value for the ephemeral-storage parameter is set for the wkc-data-rules pod and whether the value is less than 2Gi:
```
oc get deployment wkc-data-rules --output="jsonpath={.spec.template.spec.containers[*].resources.limits.ephemeral-storage}" && echo -e "\n"
```
- If the ephemeral-storage parameter value is less than 2Gi, run the following command to set the value to 2Gi:
```
oc patch wkc wkc-cr -n ${NAMESPACE} --type merge -p '{"spec":{"wkc_data_rules_resources":{"requests":{"cpu":"100m","memory":"800Mi","ephemeral-storage":"50Mi"},"limits":{"cpu":1,"memory":"2048Mi","ephemeral-storage": "2Gi" }}}}'
```
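After the patch is reconciled, you can re-run the check to confirm the new limit. A sketch:
```
# Confirm that the ephemeral-storage limit is now 2Gi
oc get deployment wkc-data-rules \
  --output="jsonpath={.spec.template.spec.containers[*].resources.limits.ephemeral-storage}" && echo
```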
Improving export performance
Before migrating, you can complete the following steps to improve the performance of the export.
1. Create additional indexes in the XMETA database and update the database statistics:
   - Access the iis-services pod by using the following command:
```
oc exec -it $(oc get pods -o custom-columns=":metadata.name" -l app=iis-services) bash
```
   - Run the xmetaAdmin commands in the iis-services pod to create the extra indexes that are needed for better export performance:
```
cd /opt/IBM/InformationServer/ASBServer/bin
./xmetaAdmin.sh addIndex -model ASCLModel -class DataFileFolder importedVia_DataConnection ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLModel -class DataConnection accesses_DataStore ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model DataStageX -class DSDataConnection accesses_DataStore ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLModel -class DataCollection of_PhysicalModel ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLLogicalModel -class Relationship of_LogicalModel ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLModel -class HostSystem name ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLModel -class Connector hostedBy_HostSystem ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLModel -class Connector connectionType ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model ASCLModel -class DataConnection usedBy_Connector ASC -dbfile ../conf/database.properties
./xmetaAdmin.sh addIndex -model DataStageX -class DSDataConnection usedBy_Connector ASC -dbfile ../conf/database.properties
```
   - Exit the iis-services pod:
```
exit
```
   - Update the DBMS statistics for all the tables under the XMETA database before you run the migration:
     - Log in to the c-db2oltp-iis-db2u-0 pod by using the following command:
```
oc rsh c-db2oltp-iis-db2u-0 bash
```
     - To update the DBMS statistics, run the following commands:
```
db2 connect to xmeta
db2 -x "SELECT 'runstats on table',substr(rtrim(tabschema)||'.'||rtrim(tabname),1,50),' and indexes all;' FROM SYSCAT.TABLES WHERE (type = 'T') AND (tabschema = 'XMETA')" > /tmp/runstats_xmeta.out
db2 -tvf /tmp/runstats_xmeta.out
```
     - Exit the c-db2oltp-iis-db2u-0 pod:
```
exit
```
2. Create extra indexes for data assets in data quality projects:
   - Log in to the c-db2oltp-iis-db2u-0 pod:
```
oc rsh c-db2oltp-iis-db2u-0 bash
```
   - Change to the folder that contains the initialization scripts:
```
cd /mnt/backup/initScripts
```
   - On versions 4.8.0 and 4.8.1, edit the dq_create_indices.sql file. Starting in Cloud Pak for Data 4.8.2, this step is optional. Open the file in the vi editor and add the following entries at the end of the file:
```
-- index[1], 1.200MB
CREATE INDEX "DB2INST1"."IDX2312060847540" ON "XMETA "."INVESTIGATE_TABLEQUALITYANALYSIS" ("OF_TABLEANALYSISMASTER_XMETA" ASC, "XMETA_REPOS_OBJECT_ID_XMETA" DESC) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS;
COMMIT WORK ;
-- index[2], 23.302MB
CREATE INDEX "DB2INST1"."IDX2312060848270" ON "XMETA "."INVESTIGATE_EXECUTIONHISTORY" ("OF_QUALITYCOMPONENT_XMETA" ASC, "ENDTIME_XMETA" ASC, "STARTTIME_XMETA" ASC, "HAS_EXECUTIONRESULT_XMETA" ASC) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS;
COMMIT WORK ;
-- index[3], 1.802MB
CREATE UNIQUE INDEX "DB2INST1"."IDX2312060848320" ON "XMETA "."INVESTIGATE_TABLEANALYSISMASTER" ("XMETA_REPOS_OBJECT_ID_XMETA" ASC) INCLUDE ("TABLEANALYSISSTATUS_XMETA", "ANALYSISMASTER_XMETA") ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS;
COMMIT WORK ;
-- index[4], 0.618MB
CREATE INDEX "DB2INST1"."IDX2312060848360" ON "XMETA "."INVESTIGATE_TABLEANALYSISMASTER_DATACOLLECTION_REFFROM_DATACOLLECTION" ("DATACOLLECTION_XMETA" ASC) ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS;
COMMIT WORK ;
-- index[5], 0.743MB
CREATE UNIQUE INDEX "DB2INST1"."IDX2312060848530" ON "XMETA "."INVESTIGATE_TABLEANALYSISSTATUS" ("XMETA_REPOS_OBJECT_ID_XMETA" ASC) INCLUDE ("DATAQUALITYANALYSISDATE_XMETA", "DATAQUALITYANALYSISSTATUS_XMETA") ALLOW REVERSE SCANS COLLECT SAMPLED DETAILED STATISTICS;
COMMIT WORK ;
```
   - Run the following script to create the indexes:
```
./dq_manage_indices.sh
```
   - Exit the c-db2oltp-iis-db2u-0 pod:
```
exit
```
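To spot-check that the extra data quality indexes were created, you can query the Db2 catalog. A sketch, assuming the db2 profile is loaded by the login shell in the pod:
```
# List the indexes that dq_manage_indices.sh is expected to create
oc rsh c-db2oltp-iis-db2u-0 bash -lc "db2 connect to xmeta >/dev/null && db2 -x \"SELECT indname FROM SYSCAT.INDEXES WHERE indname LIKE 'IDX23120608%'\""
```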
Increasing the LTPA token timeout parameter
- Access the iis-services pod by using the following command:
```
oc exec -it $(oc get pods -o custom-columns=":metadata.name" -l app=iis-services) bash
```
- Update the /opt/IBM/InformationServer/wlp/usr/servers/iis/server.xml file. Find the <ltpa expiration="795m"/> parameter and update the expiration value with a larger number. Change it to 2880m, which corresponds to 48 hours.
- Restart the IIS server by using the following commands:
```
/opt/IBM/InformationServer/initScripts/quiesceContainer.sh
/opt/IBM/InformationServer/initScripts/unquiesceContainer.sh
```
- Exit from the pod:
```
exit
```
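If you prefer to script the edit instead of opening the file, a sed one-liner can update the value in place. This is a sketch that assumes the expiration is currently exactly 795m; verify the current value first and keep a backup:
```
# Inside the iis-services pod: back up server.xml, then raise the LTPA expiration to 2880m
cp /opt/IBM/InformationServer/wlp/usr/servers/iis/server.xml /tmp/server.xml.bak
sed -i 's/<ltpa expiration="795m"\/>/<ltpa expiration="2880m"\/>/' \
  /opt/IBM/InformationServer/wlp/usr/servers/iis/server.xml
```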
Disabling global call logs
To prevent the CouchDB PVC from running out of space, disable global call logs.
- Run the following command:
```
oc patch ccs ccs-cr --namespace=${NAMESPACE} --type merge --patch '{"spec": {"catalog_api_properties_global_call_logs": "false"}}'
```
- Wait until the reconciliation of the Common Core Services CR is complete. You can check the status by using the following command:
```
oc get ccs ccs-cr --namespace ${NAMESPACE} -o jsonpath="{.status.ccsStatus}"
```
- Restart the catalog-api pods:
  - Find the names of the catalog-api pods:
```
oc get pods | grep catalog-api
```
  - Restart the pods by using the following command. Run the command for each ID that is returned by the previous command.
```
oc delete pod catalog-api-<id>
```
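As an alternative to deleting the pods one by one, a short loop can restart them all. A sketch:
```
# Delete every catalog-api pod in one pass; the deployment recreates them
for pod in $(oc get pods --namespace=${NAMESPACE} -o name | grep catalog-api); do
  oc delete --namespace=${NAMESPACE} "${pod}"
done
```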
After migration is complete, you can enable the logs again by setting the catalog_api_properties_global_call_logs property to true.
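The re-enable step mirrors the disable command. A sketch:
```
# Re-enable global call logs after the migration
oc patch ccs ccs-cr --namespace=${NAMESPACE} --type merge --patch '{"spec": {"catalog_api_properties_global_call_logs": "true"}}'
```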
Disabling service calls that are not related to migration
To disable service calls that are not related to the migration, set the com.ibm.iis.ismigration property in the iis-services pod before you start the export:
```
IIS_SERVICES_POD=`oc get pods -n ${NAMESPACE} -o custom-columns=POD:.metadata.name | grep iis-services`
oc exec -it ${IIS_SERVICES_POD} -n ${NAMESPACE} -- bash -c "/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ismigration -value true"
oc delete pod ${IIS_SERVICES_POD}
```
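After the migration, you can restore the regular service calls by setting the property back to false. A sketch that mirrors the commands above; the pod name changes after the restart, so it is fetched again first:
```
# Restore normal service calls once the export is done
IIS_SERVICES_POD=`oc get pods -n ${NAMESPACE} -o custom-columns=POD:.metadata.name | grep iis-services`
oc exec -it ${IIS_SERVICES_POD} -n ${NAMESPACE} -- bash -c "/opt/IBM/InformationServer/ASBServer/bin/iisAdmin.sh -set -key com.ibm.iis.ismigration -value false"
oc delete pod ${IIS_SERVICES_POD}
```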
Creating a db2dsdriver.cfg configuration file for migrating Db2 connections
1. Find the Db2® connections in the legacy components.
   - Log in to the iis-services pod:
```
oc exec -it $(oc get pods -o custom-columns=":metadata.name" -l app=iis-services) bash
```
   - Run the following command to check whether Db2 connections exist in the legacy components:
```
/opt/IBM/InformationServer/ASBServer/bin/xmetaAdmin.sh query -expr "select dc.name as connection_name, dc.username as user_name, dc.connectionString as database_name from connector in Connector, dc in connector->uses_DataConnection where connector.name='DB2Connector'" -dbfile /opt/IBM/InformationServer/ASBServer/conf/database.properties http:///5.3/ASCLModel.ecore
```
   - Verify the output and check whether the Db2 connections are valid and required for migration.
   - Exit the iis-services pod:
```
exit
```
   - Proceed with step 2 only if the Db2 connections are required for migration.
2. Create a db2dsdriver.cfg configuration file for the Db2 database on the is-en-conductor-0 pod and make the configuration file available to the ASBNode agent and the Connector Access Service (CAS).
   - Log in to the is-en-conductor-0 pod as a user with administrator rights:
```
oc exec -it is-en-conductor-0 bash
```
   - Set the following environment variables:
```
DB2_INSTANCE_NAME=<db2-instance-name>
OUTPUT_FOLDER=<output folder>
```
   - Create and populate the db2dsdriver.cfg configuration file by running the following command:
```
db2dsdcfgfill -i ${DB2_INSTANCE_NAME} -o ${OUTPUT_FOLDER}
```
   - Make sure that read permission on the generated db2dsdriver.cfg file is granted to the group and to other users. Run the following command:
```
chmod 644 ${OUTPUT_FOLDER}/db2dsdriver.cfg
```
   - Check the content of the generated db2dsdriver.cfg file. If you find any local database entries with the settings host="LOCALHOST" and port="0", replace LOCALHOST with the correct hostname and update the port entry with the correct Db2 port number. Save your changes.
   - Make the db2dsdriver.cfg configuration file available to the ASBNode agent and to CAS:
     - Add the following environment variable to the /opt/IBM/InformationServer/ASBNode/bin/NodeAgents_env_DS.sh file:
```
export CC_DB2_CONNECTION_MIGRATION_DB2DSDRIVER_CFG_${DB2_INSTANCE_NAME}=${OUTPUT_FOLDER}/db2dsdriver.cfg
```
     - Restart the ASBNode agent by running the following commands. The user who starts the ASBNode agent (the root or the dsadm user) must have read permission on the db2dsdriver.cfg configuration file.
```
/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh stop
/opt/IBM/InformationServer/ASBNode/bin/NodeAgents.sh start
```
If you have multiple Db2 instances, complete these steps for each instance.
For some Db2 versions, running the db2dsdcfgfill command might not create the db2dsdriver.cfg configuration file in the specified folder. If this error occurs, check the known issue IT38055: DB2DSDCFGFILL DOES NOT CREATE A DB2DSDRIVER.CFG FILE for affected versions and follow the instructions to resolve the problem.
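As a final check, you can scan the generated file for leftover placeholder entries before restarting the agent. A sketch to run inside the is-en-conductor-0 pod:
```
# Flag entries that still point at LOCALHOST or port 0
grep -nE 'host="LOCALHOST"|port="0"' ${OUTPUT_FOLDER}/db2dsdriver.cfg \
  || echo "No placeholder entries found"
```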