Known issues and limitations for IBM Knowledge Catalog

The following known issues and limitations apply to IBM Knowledge Catalog.

Known issues

General

Installing, upgrading, and uninstalling

Migration and removal of legacy functions

For known issues with migration from InfoSphere Information Server, see Known issues for migration from InfoSphere Information Server.

Catalogs and Projects

Governance artifacts

Metadata import

Metadata enrichment

Data quality

MANTA Automated Data Lineage for IBM Cloud Pak for Data

Lineage

Relationship explorer

Reporting

Also see:

Limitations

Catalogs and Projects

Governance artifacts

Metadata import

Metadata enrichment

Data quality

Lineage

General issues

You might encounter these known issues and restrictions when you work with the IBM Knowledge Catalog service.

Assets imported with the user admin instead of cpadmin

For Cloud Pak for Data clusters with Identity Management Service enabled, the default administrator is cpadmin. However, for import, the default administrative user admin is used. Therefore, the assets are imported with the admin user instead of cpadmin.

Applies to: 5.0.0 and later

Workaround:

Before running the import, apply the following workaround:

  1. Edit the config map by executing oc edit cm catalog-api-exim-cm

  2. Manually update the environment variable admin_username in import-job.spec.template.spec.env from:

    - name: admin_username
    value: ${admin_username}
    

    to:

    - name: admin_username
    value: cpadmin
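
    To confirm the change, you can print the relevant part of the config map. This is a minimal check, assuming the config map name from step 1:

    oc get cm catalog-api-exim-cm -o yaml | grep -A 1 admin_username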
    

Heavy I/O load can cause out-of-memory failures of the wkc-db2u instance

Applies to: 5.0.0 and later

After a metadata enrichment job fails, you see that the pods for the glossary service, data quality rules, and wkc-db2 were restarted. When you check the status of the wkc-db2 pod, you see the following error:

Error:
  terminated:
          exitCode: 143
          reason: OOMKilled

This error indicates that resource limits must be increased.

Workaround: Scale up the Db2 instance for the IBM Knowledge Catalog service on Cloud Pak for Data to enhance high availability and increase processing capacity for the IBM Knowledge Catalog service. Allocate additional memory and CPU resources to the existing Db2 deployment by completing these steps:

  1. Specify the CPU and memory limit. In this example, CPU is set to 8 vCPU and memory is set to 15 Gi. Modify the values according to your needs.

    oc patch db2ucluster db2oltp-wkc --type=merge --patch '{"spec": {
    "podConfig": {
        "db2u": {
            "resource": {
                "db2u": {
                    "limits": {
                        "cpu": "8",
                        "memory": "15Gi"
                    }
                }
            }
        }
    }
    }}'
    
  2. Wait for the c-db2oltp-wkc-db2u-0 pod to restart.

For more information, see Scaling up Db2 for IBM Knowledge Catalog. If needed, also complete steps 3 to 6 of the described procedure.
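
To confirm that the new limits are applied after the pod restarts, you can check the resources of the db2u container. This is a minimal sketch that assumes the default pod and container names:

oc get pod c-db2oltp-wkc-db2u-0 -o jsonpath='{.spec.containers[?(@.name=="db2u")].resources.limits}'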

Search bar returning incorrect results

  • Searching for assets when using the search bar returns unexpected results if only one or two characters are used.

    Applies to: 5.0.0 and later

    Workaround: Type at least three characters in the search bar.

  • Searching for a string with more than 12 characters without a space returns no results.

    Applies to: 5.0.0
    Fixed in: 5.0.1

    Workaround: Type no more than 12 characters without a space in the search bar.

  • When searching for assets with a hyphen in the name, results are correct but the asset name is truncated on the results list.

    Applies to: 5.0.1
    Fixed in: 5.0.2

    Workaround: Hover over the asset name and the correct name is displayed in a tooltip.

Installing, upgrading, and uninstalling

You might encounter these known issues while installing, upgrading, or uninstalling IBM Knowledge Catalog.

When uninstalling, terminating PVCs might get stuck

Applies to: 5.0 and later

During the uninstall of IBM Knowledge Catalog, the PVC c-db2oltp-wkc-meta might get stuck in the Terminating state.

Note:

This might be seen in environments that have been upgraded, and will prevent new installations of IBM Knowledge Catalog unless the PVC is removed altogether.

When inspecting the PVC, it might show which pods are still using it, which prevents the PVC from being deleted. For example:

Used By:       c-db2oltp-wkc-11.5.8.0-cn1-to-11.5.8.0-cn5-f6d4j
               c-db2oltp-wkc-11.5.8.0-cn1-to-11.5.8.0-cn5-qhwzm
               db2u-ssl-rotate-db2oltp-wkc-k78kg

In the example, the c-db2oltp-wkc-meta PVC is still being used.
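
To inspect the PVC and see which pods still mount it, you can describe it; the namespace variable here is the same one that is used in the workaround steps:

oc describe pvc c-db2oltp-wkc-meta -n ${PROJECT_CPD_INST_OPERANDS}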

Workaround: To ensure the PVC is properly deleted, the completed pods and jobs that are still mounting the PVC must be manually deleted.

Follow these steps to delete the completed job and pods that still mount the PVC:

  1. Delete completed job db2u-ssl-rotate-db2oltp-wkc if it exists:

    oc delete job db2u-ssl-rotate-db2oltp-wkc -n ${PROJECT_CPD_INST_OPERANDS} --ignore-not-found
    
  2. Delete the completed upgrade pods, if they exist:

    oc delete po c-db2oltp-wkc-11.5.8.0-cn1-to-11.5.8.0-cn5-f6d4j -n ${PROJECT_CPD_INST_OPERANDS}
    oc delete po c-db2oltp-wkc-11.5.8.0-cn1-to-11.5.8.0-cn5-qhwzm -n ${PROJECT_CPD_INST_OPERANDS}
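
    After the job and pods are deleted, the PVC should finish terminating. You can confirm that it is gone with the following check; the command returns a NotFound error when the PVC is fully removed:

    oc get pvc c-db2oltp-wkc-meta -n ${PROJECT_CPD_INST_OPERANDS}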
    

When installing or upgrading IBM Knowledge Catalog, the wdp-profiling-iae-thirdparty-lib-volume-instance job might fail

Applies to: 5.0.0 and later

During the deployment of IBM Knowledge Catalog, the wdp-profiling-iae-thirdparty-lib-volume-instance job might fail, and the following message appears in the IBM Knowledge Catalog custom resource (CR):

Failed at task: Deploy job resource and wait for it to complete - Item: iae-thirdparty-lib-volume-instance
      The error was: Failed to patch object: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Job.batch \\"wdp-profiling-iae-thirdparty-lib-volume-instance\\" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\\"\\", GenerateName:\\"\\", Namespace:\\"\\", SelfLink:\\"\\", UID:\\"\\", ResourceVersion:\\"\\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:\\u003cnil\\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\\"app\\":\\"wdp-profiling\\", \\"app.kubernetes.io/instance\\":\\"0075-wkc-lite\\", \\"app.kubernetes.io/managed-by\\":\\"Tiller\\", \\"app.kubernetes.io/name\\":\\"wdp-profiling-chart\\", \\"chart\\":\\"wdp-profiling-chart\\", \\"controller-uid\\":\\"67636a6b-1b5d-45c3-a960-d4b6948b3886\\", \\"helm.sh/chart\\":\\"wdp-profiling-chart\\", \\"heritage\\":\\"Tiller\\", \\"icpdsupport/addOnId\\":\\"wkc\\", \\"icpdsupport/app\\":\\"api\\", \\"icpdsupport/module\\":\\"wdp-profiling\\", \\"job-name\\":\\"wdp-profiling-iae-thirdparty-lib-volume-instance\\", \\"release\\":\\"0075-wkc-lite\\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\\"wkc.cpd.ibm.com/v1beta1\\", Kind:\\"WKC\\", Name:\\"wkc-cr\\", UID:\\"e59deb0f-2f4a-4e13-bfcf-274c6c267fa1\\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:\\"wdp-profiling-iae-thirdparty-lib-volume-config\\", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc0e8edf600), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil), core.Volume{Name:\\"secrets-mount\\", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(0xc06a6d6640), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:\\"wdp-profiling-iae-thirdparty-lib-volume-instance\\", Image:\\"cp.icr.io/cp/cpd/wkc-init-container-wkc@sha256:d49ce63df1c06546a22b0c483b1e3f2a2159c3e30d81208b9e30105dbc2d7a0e\\", Command:[]string{\\"/bin/sh\\", \\"/wkc/genkeys.sh\\"}, Args:[]string(nil), WorkingDir:\\"\\", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:\\"GATEWAY_HOST\\", Value:\\"\\", ValueFrom:(*core.EnvVarSource)(0xc06a6d6560), Resources:core.ResourceRequirements{Limits:core.ResourceList{\\"cpu\\":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"500m\\", Format:\\"DecimalSI\\"}, \\"memory\\":resource.Quantity{i:resource.int64Amount{value:256, scale:6}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"256M\\", Format:\\"DecimalSI\\", Requests:core.ResourceList{\\"cpu\\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"100m\\", Format:\\"DecimalSI\\"}, \\"memory\\":resource.Quantity{i:resource.int64Amount{value:128, scale:6}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"128M\\", Format:\\"DecimalSI\\"}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:\\"wdp-profiling-iae-thirdparty-lib-volume-config\\", ReadOnly:false, MountPath:\\"/wkc\\", SubPath:\\"\\", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:\\"\\"}, core.VolumeMount{Name:\\"secrets-mount\\", ReadOnly:true, MountPath:\\"/etc/.secrets\\", SubPath:\\"\\", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:\\"\\", VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:\\"/dev/termination-log\\", TerminationMessagePolicy:\\"File\\", ImagePullPolicy:\\"IfNotPresent\\", SecurityContext:(*core.SecurityContext)(0xc0c3a7a060), Stdin:false, StdinOnce:false, TTY:false, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:\\"Never\\", TerminationGracePeriodSeconds:(*int64)(0xc0e2059810), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\\"ClusterFirst\\", NodeSelector:map[string]string(nil), ServiceAccountName:\\"zen-norbac-sa\\", AutomountServiceAccountToken:(*bool)(0xc0e20596d5), NodeName:\\"\\", SecurityContext:(*core.PodSecurityContext)(0xc0551e50e0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:\\"\\", Subdomain:\\"\\", SetHostnameAsFQDN:(*bool)(nil), 
Affinity:(*core.Affinity)(0xc072d56870), SchedulerName:\\"default-scheduler\\", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:\\"\\", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil): field is immutable","reason":"Invalid","details":{"name":"wdp-profiling-iae-thirdparty-lib-volume-instance","group":"batch","kind":"Job","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\\"\\", GenerateName:\\"\\", Namespace:\\"\\", SelfLink:\\"\\", UID:\\"\\", ResourceVersion:\\"\\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:\\u003cnil\\u003e, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\\"app\\":\\"wdp-profiling\\", \\"app.kubernetes.io/instance\\":\\"0075-wkc-lite\\", \\"app.kubernetes.io/managed-by\\":\\"Tiller\\", \\"app.kubernetes.io/name\\":\\"wdp-profiling-chart\\", \\"chart\\":\\"wdp-profiling-chart\\", \\"controller-uid\\":\\"67636a6b-1b5d-45c3-a960-d4b6948b3886\\", \\"helm.sh/chart\\":\\"wdp-profiling-chart\\", \\"heritage\\":\\"Tiller\\", \\"icpdsupport/addOnId\\":\\"wkc\\", \\"icpdsupport/app\\":\\"api\\", \\"icpdsupport/module\\":\\"wdp-profiling\\", \\"job-name\\":\\"wdp-profiling-iae-thirdparty-lib-volume-instance\\", \\"release\\":\\"0075-wkc-lite\\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\\"wkc.cpd.ibm.com/v1beta1\\", Kind:\\"WKC\\", Name:\\"wkc-cr\\", UID:\\"e59deb0f-2f4a-4e13-bfcf-274c6c267fa1\\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:\\"wdp-profiling-iae-thirdparty-lib-volume-config\\", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc0e8edf600), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil), core.Volume{Name:\\"secrets-mount\\", 
VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(nil), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(0xc06a6d6640), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}, InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:\\"wdp-profiling-iae-thirdparty-lib-volume-instance\\", Image:\\"cp.icr.io/cp/cpd/wkc-init-container-wkc@sha256:d49ce63df1c06546a22b0c483b1e3f2a2159c3e30d81208b9e30105dbc2d7a0e\\", Command:[]string{\\"/bin/sh\\", \\"/wkc/genkeys.sh\\"}, Args:[]string(nil), WorkingDir:\\"\\", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:\\"GATEWAY_HOST\\", Value:\\"\\", ValueFrom:(*core.EnvVarSource)(0xc06a6d6560), Resources:core.ResourceRequirements{Limits:core.ResourceList{\\"cpu\\":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"500m\\", Format:\\"DecimalSI\\"}, \\"memory\\":resource.Quantity{i:resource.int64Amount{value:256, scale:6}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"256M\\", Format:\\"DecimalSI\\", Requests:core.ResourceList{\\"cpu\\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"100m\\", Format:\\"DecimalSI\\"}, \\"memory\\":resource.Quantity{i:resource.int64Amount{value:128, scale:6}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\\"128M\\", Format:\\"DecimalSI\\"}, VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:\\"wdp-profiling-iae-thirdparty-lib-volume-config\\", ReadOnly:false, MountPath:\\"/wkc\\", SubPath:\\"\\", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:\\"\\"}, core.VolumeMount{Name:\\"secrets-mount\\", ReadOnly:true, MountPath:\\"/etc/.secrets\\", SubPath:\\"\\", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:\\"\\", VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:\\"/dev/termination-log\\", TerminationMessagePolicy:\\"File\\", ImagePullPolicy:\\"IfNotPresent\\", SecurityContext:(*core.SecurityContext)(0xc0c3a7a060), Stdin:false, StdinOnce:false, TTY:false, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:\\"Never\\", 
TerminationGracePeriodSeconds:(*int64)(0xc0e2059810), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\\"ClusterFirst\\", NodeSelector:map[string]string(nil), ServiceAccountName:\\"zen-norbac-sa\\", AutomountServiceAccountToken:(*bool)(0xc0e20596d5), NodeName:\\"\\", SecurityContext:(*core.PodSecurityContext)(0xc0551e50e0), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:\\"\\", Subdomain:\\"\\", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(0xc072d56870), SchedulerName:\\"default-scheduler\\", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:\\"\\", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil): field is immutable","field":"spec.template"}]},"code":422}\n'
    reason: Failed

Symptoms: In some environments, during the wkc-cr reconciliation, the OpenShift cluster tries to patch the wdp-profiling-iae-thirdparty-lib-volume-instance job, and the patch might fail.

Workaround: Delete the wdp-profiling-iae-thirdparty-lib-volume-instance job and continue with the wkc-cr reconciliation process by running:

oc delete job wdp-profiling-iae-thirdparty-lib-volume-instance -n <cpd_instance>
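
After the job is deleted, the operator re-creates it during the next reconciliation. You can watch the progress on the custom resource; the CR name wkc-cr is taken from the error output above:

oc get wkc wkc-cr -n <cpd_instance> -o yaml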

Categories might not be visible after installing

Applies to: 5.0.0, 5.0.1
Fixed in: 5.0.2

Categories might not be visible after installing IBM Knowledge Catalog.

Workaround:

  1. Log in to the cluster and run the following commands in the c-db2oltp-wkc-db2u-0 container:

    oc exec -it c-db2oltp-wkc-db2u-0 bash
    db2 connect to BGDB
    db2 "set session authorization \"999\""
    db2 "UPDATE BG.CATEGORY SET MIGRATION_STATUS='NOT_MIGRATED' WHERE ARTIFACT_ID='e39ada11-8338-3704-90e3-681a71e7c839'"
    
  2. After running the commands, exit the Db2 console.

  3. Run the following command from the command line interface (CLI):

    curl -X 'POST' \
        "https://$HOST/v3/categories/collaborators/bootstrap" \
        -H 'accept: application/json' \
        -H "Authorization: Bearer $TOKEN" \
        -d ''
    

    Set the HOST and TOKEN environment variables before you run the command (see the sketch after these steps).

    Note:

    To learn how to generate a token, see Generating a bearer token.

  4. Verify that the command completed successfully:

    curl -X 'GET' \
         "https://$HOST/v3/categories/collaborators/bootstrap/status" \
         -H 'accept: application/json' \
         -H "Authorization: Bearer $TOKEN"
    

    The command must return a SUCCESS message.

  5. Wait about another minute for the caches to be rebuilt, then refresh the categories page. The [Uncategorized] category is now visible.
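
For the commands in steps 3 and 4, the HOST and TOKEN variables must be set first. The following sketch shows one common way to do this on Cloud Pak for Data; the /icp4d-api/v1/authorize endpoint, the use of jq, and the credential placeholders are assumptions, so adjust them to your environment or follow Generating a bearer token:

export HOST=<cpd-route-hostname>
export TOKEN=$(curl -k -s -X POST "https://$HOST/icp4d-api/v1/authorize" \
    -H 'Content-Type: application/json' \
    -d '{"username":"<username>","password":"<password>"}' | jq -r .token)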

In some upgrade scenarios, data source definition assignments might not work

When you upgrade Data Virtualization from a Cloud Pak for Data 4.8.5 system to Cloud Pak for Data 5.0.0, a data source definition is automatically created for Data Virtualization.

If the source system did not have IBM Knowledge Catalog installed and you install IBM Knowledge Catalog on the Cloud Pak for Data 5.0 system after you upgrade Data Virtualization, the assignment of the data source definition to connections and connected assets will not function properly.

Workaround: Delete the existing endpoints of the data source definition and then re-add them.

Can't install IBM Knowledge Catalog Standard or IBM Knowledge Catalog Premium through the Red Hat OpenShift console

Applies to: 5.0.0 and later

You can't install the IBM Knowledge Catalog Standard Cartridge (IKCStandard operator) or the IBM Knowledge Catalog Premium Cartridge (IKCPremium operator) by using the Red Hat OpenShift console.

Workaround: Install the cartridges by following the instructions in Installing IBM Knowledge Catalog.

In multi-pod configurations, event handling errors can occur

Applies to: 5.0.0 and later

In multi-pod configurations for semantic automation, the map concepts and expand names features might not work properly because of an error with event handling. This might result in failing metadata enrichment jobs.

Workaround: To fix the error, you must reduce the number of replicas being used:

  1. Set the IBM Knowledge Catalog Standard operator to maintenance mode without changing the scale:
    oc patch ikcstandard ikc-standard-cr --patch '{"spec": {"ignoreForMaintenance": true}}' --type='merge'
    
  2. Reduce the number of replicas:
    oc scale deploy semantic-automation --replicas=1
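
    You can confirm that a single replica remains with the following check; the namespace variable is an assumption because the original command omits it:

    oc get deploy semantic-automation -n ${PROJECT_CPD_INST_OPERANDS}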
    

When setting memory limits for semantic automation, the incorrect values are used

Applies to: 5.0.0 and later

When setting up the memory limits for semantic automation in the YAML file, you may run into issues when using decimal values to set the memory.

Workaround: Change the memory for limits and requests to non-decimal values, such as mebibytes (Mi). For example, instead of using:

memory: 1.8G

Use:

memory: 1800Mi

To change the memory values, run:

oc edit deploy semantic-automation -n ${PROJECT_CPD_INST_OPERANDS}
Note: This applies to all deployments because the issue is in the OpenShift Container Platform.
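
In the deployment YAML, the values to change are under the container's resources section. A minimal sketch of the non-decimal form (the request value shown here is illustrative):

resources:
  limits:
    memory: 1800Mi
  requests:
    memory: 1800Mi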

When uninstalling IBM Knowledge Catalog, the cleanup process may not function properly

Applies to: 5.0.1
Fixed in: 5.0.2

When uninstalling IBM Knowledge Catalog, the process doesn't clean up resources correctly, leaving IBM Knowledge Catalog pods undeleted.

Workaround: To fix this issue, delete the deployments of the remaining pods:

  1. Make sure that the wkc-cr custom resource has been deleted during the uninstall:
    oc get wkc -n ${PROJECT_CPD_INST_OPERANDS}
    
  2. If the wkc-cr has been deleted, check which pods remain:
    oc get pod -n ${PROJECT_CPD_INST_OPERANDS} -l release=0075-wkc-lite
    
  3. Find the deployments that own the remaining pods:
    oc get deploy -n ${PROJECT_CPD_INST_OPERANDS} -l release=0075-wkc-lite
    
  4. Delete the deployments for each of the undeleted pods:
    oc delete deploy <service deployment name> -n ${PROJECT_CPD_INST_OPERANDS}
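
    Alternatively, if many deployments remain, you can delete them all at once by reusing the label selector from step 3:

    oc delete deploy -n ${PROJECT_CPD_INST_OPERANDS} -l release=0075-wkc-lite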
    

When uninstalling and re-installing IBM Knowledge Catalog, there might be an issue with Data Refinery

Applies to: 5.0.2
Fixed in: 5.0.3

When you uninstall and re-install IBM Knowledge Catalog, there is an issue with the installation of Data Refinery, which is installed automatically with IBM Knowledge Catalog in Version 5.0.2.

Workaround:

Important: Follow the steps before the uninstall, or after the re-install failure occurs.
  1. Before you run the shutdown/uninstall command, open a terminal, log in to the cluster by using the oc login command, and then manually delete the ZenExtension CR for Data Refinery by using the following command:

    oc -n <cpd_instance> delete ZenExtension/data-refinery-routes
    
  2. If the deletion still hangs after a few minutes, open a separate terminal and run the following command:

    oc -n <cpd_instance> patch ZenExtension/data-refinery-routes --type=merge -p '{"metadata": {"finalizers":null}}'
    
  3. The oc command from the previous terminal should complete immediately.

  4. Issue the shutdown/uninstall command.

After the upgrade to 5.0.3, predefined roles are missing permissions

Applies to: 5.0.3

After the upgrade from IBM Knowledge Catalog 4.7.x or 4.8.x to IBM Knowledge Catalog 5.0.3 or IBM Knowledge Catalog Premium 5.0.3, some permissions are missing from Data Quality Analyst and Data Steward roles. Users with these roles might not be able to run metadata imports or access any governance artifacts.

Workaround: To add any missing permissions to the Data Quality Analyst and Data Steward roles, restart the zen-watcher pod by running the following command:

oc delete pod $(oc get po -n ${PROJECT_CPD_INST_OPERANDS} -o custom-columns="Name:metadata.name" -l app.kubernetes.io/component=zen-watcher --no-headers) -n ${PROJECT_CPD_INST_OPERANDS}
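
You can verify that the zen-watcher pod was re-created and is running by reusing the same label selector:

oc get po -n ${PROJECT_CPD_INST_OPERANDS} -l app.kubernetes.io/component=zen-watcher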

Enabling semantic enrichment features fails during installation of IBM Knowledge Catalog Premium Cartridge 5.0.3

Applies to: 5.0.3

If you set the install option enableSemanticAutomation to true when you install IBM Knowledge Catalog Premium 5.0.3, this setting is not propagated to the ikc_standard custom resource. Thus, the semantic enrichment features are not enabled and inference foundation models (watsonx_ai_ifm) are not installed.

Workaround: Update the ikc_standard custom resource manually after the installation is complete:

oc patch ikcstandard ikc-standard-cr --type=merge -p '{"spec":{"enableSemanticAutomation": true}}'
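
To confirm that the setting was applied, you can read it back from the custom resource:

oc get ikcstandard ikc-standard-cr -o jsonpath='{.spec.enableSemanticAutomation}'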

Catalog and project issues

You might encounter these known issues and restrictions when you use catalogs.

Missing previews

Applies to: 5.0.0 and later

You might not see previews of assets in these circumstances:

  • In a catalog or project, you might not see previews or profiles of connected data assets that are associated with connections that require personal credentials. You are prompted to enter your personal credentials to start the preview or profiling of the connection asset.
  • In a catalog, you might not see previews of JSON, text, or image files that were published from a project.
  • In a catalog, the previews of JSON and text files that are accessed through a connection might not be formatted correctly.
  • In a project, you cannot view the preview of image files that are accessed through a connection.

Details for masked columns display incorrectly

Applies to: 5.0.0 and later

In the asset preview page, the Masked columns value might display an incorrect count. This is known to happen for virtualized join views and watsonx.data connected data. In addition, the masked-column indicator icon might be missing from the header of columns with masked data, or might be displayed incorrectly.

When a deep enforcement solution is configured to protect a data source, column masking is applied by that solution. Each protection solution has its own semantics for applying data masking, so the masking indicators that are displayed in the user interface might not align with the columns that are actually masked.

For details on how masking rules apply to virtualized views, see Authorization model for views.

Workaround: None.

Unauthorized users might have access to profiling results

Applies to: 5.0.0 and later

Users who are collaborators with any role in a project or a catalog can view an asset profile even if they don't have access to that asset at the data source level or in Data Virtualization.

Workaround: Before you add users as collaborators to a project or a catalog, make sure they are authorized to access the assets in the container and thus to view the asset profiles.

Duplicate action fails when IP address changes

Applies to: 5.0.0

If the connection is using a hostname with a dynamic IP address, duplicate actions might fail during connection creation.

Cannot run import operations on a container package exported from another Cloud Pak for Data cluster

Applies to: 5.0.0 and later

When you import a container package that was exported from another Cloud Pak for Data cluster, permissions must be configured on the archive so that import operations on the target cluster can access the files within the archive.

Workaround: To extract the export archive and modify permissions, complete the following steps:

  1. Create a temporary directory by running:
    mkdir temp_directory
    
  2. Extract the archive by running:
    tar -xvf cpd-exports-<export_name>-<timestamp>-data.tar --directory temp_directory
    
  3. Run the following command on the target cluster:
    oc get ns $CLUSTER_CPD_NAMESPACE -o=jsonpath='{@.metadata.annotations.openshift\.io/sa\.scc\.supplemental-groups}'
    
    Example output: 1000700000/10000.
  4. The first part of the output from the previous step (for example, 1000700000) must be applied as the new ownership on all files within the archive. For example:
    cd temp_directory/
    chown -R 1000700000:1000700000 <export_name>
    
  5. Archive the fixed files with the directory, using the same export name and timestamp as the original exported tar:
    tar -cvf cpd-exports-<export_name>-<timestamp>-data.tar <export_name>/
    
  6. Upload the archive.

Data protection rules don't apply to column names that contain spaces

Applies to: 5.0.0 and later

If a column name contains trailing or leading spaces during import, the column cannot be masked using data protection rules.

Workaround: When you're importing columns, ensure that column names don't contain trailing or leading spaces.

Preview of data from file-based connections other than IBM Cloud Object Storage is not fully supported

Applies to: 5.0.0 and later

Connected assets from file-based connections other than IBM Cloud Object Storage do not preview correctly. Data might appear in a table with missing or incorrect values. There is no workaround at this time.

Scroll bar is not visible when adding assets to a project on macOS

When you add assets to a project, the scroll bar might not be available in the Selected assets table, which then shows a maximum of 5 assets.

Applies to: 5.0.0 and later

Workaround: Change the macOS settings:

  1. Click the Apple symbol in the top-left corner of your Mac's menu bar, then click System Settings.
  2. Scroll down and select Appearance.
  3. Under the Show scroll bars option, click the radio button next to Always.

Can't edit an SQL-based data asset after profiling a visualization asset created from it

Applies to: 5.0.0, 5.0.1, and 5.0.2
Fixed in: 5.0.3

If you create a visualization asset from a query-based data asset and profile the visualization asset, editing the original SQL-based data asset results in an error.

Global search results display incomplete asset names when the name of the searched asset is hyphenated

Applies to: 5.0.0
Fixed in: 5.0.1

When you are searching for an asset that has a hyphenated name by using the global search bar, the search returns the correct asset, but shows an incomplete asset name. For example, when you type auto-user in the search bar, you get the results as if you typed auto only.

Workaround: To show the complete asset name, click the Preview button next to the asset to show it in the right-hand panel, or hover over the asset name.

Automatic profiling fails for migrated catalogs that include connected assets on which data protection rules were applied

Applies to: 5.0.0 and later
Fixed in: 5.0.3

When you migrate a catalog that includes connected assets on which data protection rules were applied by using cpd-cli export-import commands, automatic profiling does not work on these catalog assets. Unless the assets are manually profiled, only asset owners can access them.

Workaround: The asset owner must manually profile every asset before a catalog can be used by other users.

Can't add asset members if the catalog collaborator role comes from a group

Applies to: 5.0.0
Fixed in: 5.0.1

You can't add asset members if the Admin or Editor catalog collaborator roles are granted to your user through a user group.

Workaround: Grant the Admin or Editor catalog collaborator role directly to the user. If you choose to grant the Editor catalog collaborator role to your user, you must also ensure that the user is an asset owner or asset editor.

Incorrect message when the connection doesn't allow for access to an asset

Applies to: 5.0.0
Fixed in: 5.0.1

Access to an asset, for example when you try to view the exception values for an asset, can be blocked for these reasons:

  • The asset is protected by one or more data protection rules.
  • The connection for this asset is configured with personal credentials.

However, the message that is shown mentions only protection rules as the cause for denying access.

Project names not displayed in global search results

Applies to: 5.0.0 and later
Fixed in: 5.0.3

If you use global search to find projects, an error is returned and the project name isn't displayed. Instead, you can see the internal project ID. You can't use project IDs to search for projects.

Can't preview assets with date columns and watsonx.data as the protection solution

Applies to: 5.0.0 or later

The following error shows in the asset preview page. This is known to happen for watsonx.data connected tables when the catalog assets are configured with watsonx.data as the deep enforcement solution:

An error occurred attempting to preview this asset

The cause of the error depends on which version of the platform you are using:

  • For Cloud Pak for Data 5.0.0, watsonx.data cannot mask date columns, and no workaround is available.
  • For Cloud Pak for Data 5.0.1 or later, watsonx.data fails to mask date columns unless the format is yyyy-mm-dd. To work around the issue for other date formats, update the profile of the asset and remove the inferred data class by changing the column to a different data class or to No class detected. You can then preview the asset, and the date column is masked correctly.

Profiling might fail in GPU environments

Applies to: 5.0.2
Fixed in: 5.0.3

In clusters with GPU, profiling in catalogs or projects might fail intermittently. As a result, services that rely on profiling, such as semantic enrichment in metadata enrichment, also fail.

Workaround: Retry data profiling or rerun the metadata enrichment job.

Unexpected assets filtering results in catalogs

Applies to: 5.0.2 and later

In catalogs, when you search for an asset by using the Find assets field, the search might return assets whose names don't match the string that you typed, as well as assets that contain that string as a keyword in a property or a related item.

Connection asset type doesn't get permanently deleted after the removal

Applies to: 5.0.2 and later

The Connection asset type is not deleted immediately after removal, even if the asset removal configuration is set to Purge assets automatically upon removal in the catalog UI, and the asset is still shown in the trash.

Can't scroll down when editing users or user groups custom properties on columns

Applies to: 5.0.3

In Google Chrome, when you are editing custom properties for a user or user group, the focus is forced to the top of the dialog window and you can't scroll down.

Workaround: Use a different browser, for example Mozilla Firefox.

Can't go back to the catalogs page from the catalog asset page

Applies to: 5.0.2 and later

If you log in directly to the catalog asset page, the Catalogs link is missing from the breadcrumb menu and you can't go to the Catalogs page.

Workaround: Go to Administration > Catalogs.

Credentials are cleared on the Profiling unlocking connection page

Applies to: 5.0.2

When you try to unlock a personal credentials connection from the Profiling page and select the authentication method after you enter the username and password, these fields are cleared.

Workaround: Unlock the connection from the Asset preview page.

Can't create a connection if you're including a reference connection

Applies to: 5.0.2 and later

When you're adding connections that contain references to catalogs, you might see the following error: Unable to create connection. An unexpected error occurred of type Null pointer error. No further error information is available.

Workaround: Reference connections are not supported. Ensure that the platform connection doesn't contain any reference connections.

Governance artifacts issues

You might encounter these known issues and restrictions when you use governance artifacts.

Error Couldn't fetch reference data values shows up on screen after publishing reference data

Applies to: 5.0.0 and later

When new values are added to a reference data set, and the reference data set is published, the following error is displayed when you try to click on the values:

Couldn't fetch reference data values. WKCBG3064E: The reference_data_value for the reference_data which has parentVersionId: <ID> and code: <code> does not exist in the glossary. WKCBG0001I: Need more help?

When the reference data set is published, the currently displayed view changes to Draft-history, as marked by the green label at the top. The Draft-history view does not allow you to view the reference data values.

Workaround: To view the values, click Reload artifact so that you can view the published version.

Publishing large reference data sets fails with Db2 transaction log full

Applies to: 5.0.0 and later

Publishing large reference data sets might fail with a Db2 error such as:

The transaction log for the database is full. SQLSTATE=57011

Workaround: Publish the set in smaller chunks, or increase Db2 transaction log size as described in the following steps.

  1. Modify the transaction log settings with the following commands:

    db2 update db cfg for bgdb using LOGPRIMARY 5 --> default value, should not be changed
    db2 update db cfg for bgdb using LOGSECOND 251
    db2 update db cfg for bgdb using LOGFILSIZ 20480
    
  2. Restart Db2.

You can calculate the required transaction log size as follows:

(LOGPRIMARY + LOGSECOND) * LOGFILSIZ
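
LOGFILSIZ is specified in 4 KB pages, so with the values from step 1 this gives (5 + 251) * 20480 pages * 4 KB ≈ 20 GB of transaction log space, which matches the 20 GB recommendation below for 1M reference data values and 1M relationships.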

For publishing large sets, the following Db2 transaction log sizes are recommended:

  • 5GB for 1M reference data values and 300K relationships
  • 20GB for 1M reference data values and 1M relationships
  • 80GB for 1M reference data values and 4M relationships

where the relationship count is the sum of the parent, term and value mapping relationships for reference data values in the set.

Importing Knowledge Accelerator artifacts fails when metadata import jobs are running

If you are running multiple large metadata import jobs and try to import a large number of artifacts from Knowledge Accelerators at the same time, the publishing process of these artifacts might fail.

Applies to: 5.0.0, 5.0.1
Fixed in: 5.0.2

Workaround: Make sure no metadata import jobs are running when you import Knowledge Accelerators.

Bulk edit for relationships does not work on reference data sets

When trying to bulk edit relationships for multiple reference data sets, an error message is returned and the changes can't be saved.

Applies to: 5.0.1
Fixed in: 5.0.2

Workaround: Update relationships for each reference data set individually.

Bulk edit for multiple relationships assigns them incorrectly

When trying to update more than one relationship type in a bulk edit for multiple artifacts, the relationships are assigned incorrectly, for example, related terms from the first relationship type are also added in the other types.

Applies to: 5.0.1
Fixed in: 5.0.2

Workaround: Update only one relationship type at a time.

Bulk edit only works on the artifacts shown on the current page

When selecting all artifacts for a bulk edit, only the artifacts listed on the current page are processed.

Applies to: 5.0.1
Fixed in: 5.0.2

Workaround: Change the pagination setting to 200 artifacts per page. You can edit only 200 artifacts at once.

Bulk edit for parent category does not work correctly when moving artifacts to [uncategorized]

When you move multiple artifacts to the [uncategorized] category in a bulk action, the old category is still displayed on draft lists and some permissions are checked against the old category.

Applies to: 5.0.1
Fixed in: 5.0.2

Workaround: If you have not published the drafts yet, delete the workflow task and move the artifacts one by one. If they were already published, bulk move these artifacts to some other category (but do not use the old one, as it may fail due to conflicts) and then move them again to [uncategorized] one by one.

Metadata import issues

You might encounter these known issues when you work with metadata import.

Assets are not imported from the IBM Cognos Analytics source when the content language is set to Japanese

Applies to: 5.0.0 and later

If you want to import metadata from the Cognos Analytics connection, where the user's content language is set to Japanese, no assets are imported. The issue occurs when you create a metadata import with the Get BI report lineage goal.

Workaround: In Cognos Analytics, change the user's content language from Japanese to English. Find the user for which you want to change the language, and change this setting in the Personal tab. Run the metadata import again.

When you import a project from a .zip file, the metadata import asset is not imported

Applies to: 5.0.0 and later

When you import a project from a file, metadata import assets might not be imported. The issue occurs when a metadata import asset was imported to a catalog, not to a project, in the source system from which the project was exported. This catalog does not exist on the target system and the metadata import asset can't be accessed.

Workaround: After you import the project from a file, duplicate metadata import assets and add them to a catalog that exists on the target system. For details, see Duplicating a metadata import asset.

Lineage metadata cannot be imported from the Informatica PowerCenter connection

Applies to: 5.0.0 and later

When you import lineage metadata from the Informatica PowerCenter connection, the metadata job run fails with the following message:

400 [Failed to create discovery asset. path=/GLOBAL_DESEN/DM_PES_PESSOA/WKF_BCB_PES_PESSOA_JURIDICA_DIARIA_2020/s_M_PEJ_TOTAL_03_CARREGA_ST3_2020/SQ_FF_ACFJ671_CNAE_SECUND�RIA details=ASTSV3030E: The field 'name' should contain valid unicode characters.]",
"more_info" : null

Workaround: Ensure that the encoding value is the same in the workflow file in Informatica PowerCenter and in the connection that was created in Automatic Data Lineage. If the values are different, use the one from the Informatica PowerCenter workflow file.
To solve the issue, complete these steps:

  1. Open Automatic Data Lineage:

    https://<CPD-HOSTNAME>/manta-admin-gui/
    
  2. Go to Connections > Data Integration Tools > IFPC and select the connection for which the metadata import failed.

  3. In the Inputs section, change the value of the Workflow encoding parameter to match the value from the Informatica PowerCenter workflow file.

  4. Save the connection.

  5. In IBM Knowledge Catalog, reimport assets for the metadata import that failed.

Related assets are not displayed in lineage that was created from the Greenplum connection

Applies to: 5.0.0 and later

When you import lineage metadata from the Greenplum connection by using the Get lineage or Get ETL lineage option, related assets are not included in the lineage.

Not all assets are imported after upgrading from 4.7.3 to 5.0

Applies to: 5.0.0 and later

When you import assets with the Get ETL lineage goal, not all assets might be imported. In Automatic Data Lineage, errors are displayed in the workflow logs.

Running lineage metadata import on Microsoft SQL Server configured with New Technology LAN Manager (NTLM) fails

Applies to: 5.0.0
Fixed in: 5.0.1

When you run a lineage metadata import on Microsoft SQL Server with NTLM authentication enabled and the Get lineage option selected, the import fails.

Workaround:

  1. Log in to the OpenShift Container Platform cluster by using the web browser.

  2. Navigate to Workloads > ConfigMaps.

  3. Find metadata-discovery-service-config and go to the YAML tab.

  4. Edit the following field from:

    data:
      manta_scanner_validation_enabled: 'true'
    

    to:

    data:
      manta_scanner_validation_enabled: 'false'
    
  5. Click Save.

  6. Navigate to Workloads > Pods and locate metadata-discovery and wkc-metadata-imports-ui pods.

  7. Click Delete Pod to restart them.

  8. In IBM Knowledge Catalog go to your project.

  9. Click New Asset and select Metadata Import.

  10. Under Get lineage section, select Get lineage.

  11. Specify the name of the import, target catalog, connection details, and scope of metadata to import.
    A metadata import job is created. It fails on the IBM Knowledge Catalog side, but a database connection and a workflow execution appear in the MANTA Automated Data Lineage Admin UI.

  12. Open MANTA Automated Data Lineage Admin UI:

    https://<CP4D_CLUSTER>/manta-admin-gui/app/index.html#/platform/connections
    
  13. Under Database, locate the connection to Microsoft SQL Server database that was created in step 11.

  14. Click Edit to change the following values:

    • Authentication type: NTLM
    • In the Username field, remove WKC_CONNECTION_PROPERTY string and replace it with the actual username, for example: Username: jsmith1.
    • Replace the password with the actual value.
    • Fill in the domain name with the actual value.
  15. Save the connection details.

  16. Navigate to Process Manager tab.

  17. Under Workflow history, ensure that you see the workflow ID that you executed in step 11, for example e3c71b17-b203-4e26-a94d-84634fabe0f5_lineage_Workflow, with a Failed status.

  18. Under Custom workflows section, locate the failed workflow and open it.

  19. Click Execute this Workflow to rerun the failed workflow.

Dummy assets get created for any file assets that come from Amazon S3 to show the complete business data lineage if Get ETL job lineage is performed

Applies to: 5.0.0

If you perform a Get ETL job lineage import that involves an Amazon S3 connection, dummy assets are created for any file assets that come from the Amazon S3 connection to show the complete business data lineage. If you then perform a metadata import for the same Amazon S3 connection, duplicate assets result: the dummy asset that was created by the Get ETL job lineage import and a valid asset that is discovered during the metadata import.

Metadata enrichment issues

You might encounter these known issues when you work with metadata enrichment.

Running primary key or relations analysis doesn't update the enrichment and review statuses

Applies to: 5.0.0 and later

The enrichment status is set or updated when you run a metadata enrichment with the configured enrichment options (Profile data, Analyze quality, Assign terms). However, the enrichment status is not updated when you run a primary key analysis or a relationship analysis. In addition, the review status does not change from Reviewed to Reanalyzed after review if new keys or relationships were identified.

In environments upgraded from version 4.7.0, you can't filter relationships in the enrichment results by assigned primary keys

Applies to: 5.0.0 and later

Starting in Cloud Pak for Data 4.7.1, you can use the Primary key filter in the key relationships view of the enrichment results to see only key relationships with an assigned primary key. This information is not available in upgrade environments if you upgraded from version 4.7.0. Therefore, the filter doesn't work as expected.

Workaround: To generate the required information, you can rerun primary key analysis or update primary key assignments manually.

Writing metadata enrichment output to an earlier version of Apache Hive than 3.0.0

Applies to: 5.0.0 and later

If you want to write data quality output generated by metadata enrichment to an Apache Hive database at an earlier software version than 3.0.0, set the following configuration parameters in your Apache Hive Server:

set hive.support.concurrency=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.enforce.bucketing=true;   # not required for version 2

set hive.compactor.initiator.on=true;
set hive.compactor.cleaner.on=true;   # might not be available depending on the version
set hive.compactor.worker.threads=1;

For more information, see Hive Transactions.

Issues with the Microsoft Excel add-in

Applies to: 5.0.0 and later

The following issues are known for the Review metadata add-in for Microsoft Excel:

  • When you open the drop-down list to assign a business term or a data class, the entry Distinctive name is displayed as the first entry. If you select this entry, it shows up in the column but does not have any effect.

  • Updating or overwriting existing data in a spreadsheet is currently not supported. You must use an empty template file whenever you retrieve data.

  • If another user works on the metadata enrichment results while you are editing the spreadsheet, the other user's changes can get lost when you upload the changes that you made in the spreadsheet.

  • Only assigned data classes and business terms are copied from the spreadsheet columns Assigned / suggested data classes and Assigned / suggested business terms to the corresponding entry columns. If multiple business terms are assigned, each one is copied to a separate column.

Column customization in metadata enrichment results seems to be reverted

Applies to: 5.0.0
Fixed in: 5.0.1

As an IBM Knowledge Catalog Standard or IBM Knowledge Catalog Premium user, you can customize the layout of the results table in metadata enrichment. However, it can happen that the results table is shown without your customizations.

Workaround: Refresh the web page.

The controls for launching the relationship explorer are always shown

Applies to: 5.0.0
Fixed in: 5.0.1

On the metadata enrichment results page, the button and menu entry for launching the relationship explorer are shown even if the relationship explorer is not available because the knowledge graph feature is not installed or the user doesn't have the required permissions.

When you set the metadata enrichment scope, the Select all option doesn't work for the data assets in a metadata import asset

Applies to: 5.0.0
Fixed in: 5.0.1

When you create or edit the data scope for a metadata enrichment and select the Data assets checkbox to include all assets from a metadata import, the selection is not updated.

Workaround: Select all data assets individually or select the metadata import.

The Data quality tab of an asset imported from a different project might incorrectly show the option to open an associated metadata enrichment

Applies to: 5.0.0, 5.0.1, and 5.0.2
Fixed in: 5.0.3

If you export a data asset that is part of a metadata enrichment with just its connection and import that to a different project, the asset's Data quality tab in the new project incorrectly shows the option to open the associated metadata enrichment. Clicking the Open metadata enrichment button opens the metadata enrichment UI and the following error message is displayed:

Error: The asset with an ID of undefined doesn't exist.

Workaround: Create a metadata enrichment for this asset in the new project before you start working with the Data quality tab.

Advanced profiling gets stuck in running mode if an asset in the scope contains columns with duplicate names

Applies to: 5.0.0 and 5.0.1
Fixed in: 5.0.2

Advanced profiling can't process data assets that contain columns with duplicate names. For such assets, the status remains In progress. Consequently, the advanced profiling job is stuck in running mode.

Workaround: To work around the issue, you have these options:

  • Update the column names in the data asset to ensure that column names are unique within the asset.
  • Remove the data asset in question from the scope of the advanced profiling job or the metadata enrichment in general.

In some cases, metadata enrichment jobs can't be paused

Applies to: 5.0.1
Fixed in: 5.0.2

In some cases, it might not be possible to pause a job run for a metadata enrichment that is configured with the Set relationships enrichment option. Instead, you might see the message Run could not be paused. This can happen if key analysis is the only processing that is still in progress.

Republishing doesn't update primary key information in catalog

Applies to: 5.0.1 and later

If you remove primary key information from a data asset that initially was published with the primary key information to a catalog with the duplicate-asset handling method Overwrite original assets in the metadata enrichment results and then republish the asset to that catalog, the primary key information on the catalog asset remains intact.

Workaround: Delete the existing catalog asset before you republish the data asset from the metadata enrichment results.

Automatically assigned classifications for an asset aren't published

Applies to: 5.0.2
Fixed in: 5.0.3

When you publish a data asset with classifications assigned to the data asset and to columns, all classifications for columns and manually assigned classifications for the asset are published. Classifications that were automatically assigned to the data asset are not published.

Workaround: Assign the respective classifications manually to the catalog asset.

In some cases, the Create data quality definition option might open an incorrect UI

Applies to: 5.0.2
Fixed in: 5.0.3

When you select the Create data quality definition option from the Create data quality check menu in a data quality page that you accessed from the metadata enrichment results, the Create data quality rule UI is displayed.

Workaround: You have these options:

  • Open the data asset, go to the Data quality page, and select Create data quality definition from the Create data quality check menu.
  • On the Assets page, click New asset > Define how to measure data quality.

Publishing SQL-based data assets from the metadata enrichment results isn't blocked

Applies to: 5.0.1 and later

Although publishing SQL-based dynamic views to catalogs is not supported, the action is not blocked in the metadata enrichment results. If you publish a dynamic view from the metadata enrichment results, the asset becomes available in the catalog with read-only SQL information.

After manual updates, Assigned by information for display names or descriptions might be incorrect

Applies to: 5.0.2 and later

When you edit the display name or description for a data asset or a column, the Assigned by information might incorrectly show Assigned by system or Assigned by generative AI.

Workaround: To see the correct Assigned by user information, refresh the results table after editing the display name or the description. Then, select the data asset or column for which you changed the values.

Asset data quality score might be shown for columns

Applies to: 5.0.2 and later

When you open the data quality details for a column from the metadata enrichment results, the asset score is shown instead of the data quality score for the selected column.

Workaround: To see the data quality score for a column, you have these alternative options:

  • From the enrichment results, open the data quality view for the asset that contains the column in which you are interested. In the Data quality checks section, switch to the Columns view and click the column to see its data quality information.
  • Open the asset that contains the column in which you are interested and go to the Data quality page. In the Data quality checks section, switch to the Columns view and click the column to see its data quality information.

Profiling results can be viewed in projects for ungoverned objects

Applies to: 5.0.3

To prevent unexpected exposure to value distributions through the profiling results of a view, all users are denied access to the profiling results of Data Virtualization views in all catalogs and projects. However, if a view is ungoverned (not published or added to a governed catalog), its profiling results can be viewed by all project collaborators who are authorized to access the catalog asset, even if they are not authorized to query the view.

Workaround: Publish the view to a governed catalog to make it subject to governance.

Data quality issues

You might encounter these known issues when you work with data quality assets.

Rules with multiple joins might return incorrect results for data assets from Apache Cassandra, Apache Hive, MongoDB, or Oracle data sources

Applies to: 5.0.0 and later

A data quality rule that is created from one or more data quality definitions and contains multiple joins might return incorrect results when it is run on data assets from Apache Cassandra, Apache Hive, MongoDB, or Oracle data sources that are connected through a Generic JDBC connection.

Workaround: Use the respective native connector.

Rules bound to columns of the data type NUMERIC in data assets from Oracle data sources might not work

Applies to: 5.0.0 and later

Testing or running a data quality rule that is bound to a NUMERIC column in a data asset from an Oracle data source fails if the data source is connected through a Generic JDBC connection.

Workaround: Use the native connector.

Runs of migrated data quality rules complete with warnings

Applies to: 5.0.0 and later

When you run a data quality rule that was migrated from the legacy data quality feature or from InfoSphere Information Server, you might see the message Run successful with warnings.

Workaround: None. You can ignore such warnings.

SQL-based rules return a data quality score of 0%

Applies to: 5.0.0 and 5.0.1
Fixed in: 5.0.2

When you manually assign a column to an SQL-based data quality rule by using the Validates the data quality of relationship, the rule always returns a data quality score of 0%.

Creating a data quality rule might take you to an unsupported experience

Applies to: 5.0.1 and 5.0.2
Fixed in: 5.0.3

If your environment includes multiple solutions, you can switch between experiences to access specific features. In some cases, you might be taken to an experience that doesn't support data quality features. For example, if you create a data quality definition from the Data quality page in the metadata enrichment results and then directly create a data quality rule from that definition, you are redirected to an experience without access to these features, which causes an error. However, the rule is still created.

Workaround: To access the new rule:

  1. Manually switch to the Cloud Pak for Data experience by clicking the Switch location icon in the toolbar.
  2. Go to the project in which you worked initially.

As an alternative, use the New asset option in the project to create data quality definitions and rules.

MANTA Automated Data Lineage

You might encounter these known issues and restrictions when MANTA Automated Data Lineage is used for capturing lineage.

Metadata import jobs for getting lineage might take a very long time to complete

Applies to: 5.0

If multiple lineage scans are requested at the same time, the corresponding metadata import jobs for getting lineage might take a very long time to complete. This is because MANTA Automated Data Lineage workflows can't run in parallel and are executed sequentially.

Chrome security warning for Cloud Pak for Data deployments where MANTA Automated Data Lineage for IBM Cloud Pak for Data is enabled

Applies to: 4.8.0 and later

When you use the Chrome web browser to access a Cloud Pak for Data cluster that has MANTA Automated Data Lineage for IBM Cloud Pak for Data enabled, the message Your connection is not private is displayed and you can't proceed. This happens because MANTA Automated Data Lineage for IBM Cloud Pak for Data requires an SSL certificate, and it occurs only if a self-signed certificate is used.

Workaround: To bypass the warning for the remainder of the browser session, type thisisunsafe anywhere on the window. Note that this code changes occasionally. The code mentioned here is valid as of the general availability date of Cloud Pak for Data 4.6.0. If it no longer works, search the web for the updated code.

Columns are displayed as numbers for a DataStage job lineage in the catalog

Applies to: 5.0.0 and later

The columns for a lineage that was imported from a DataStage job are not displayed correctly in the catalog. Instead of column names, column numbers are displayed. The issue occurs when the source or target of a lineage is a CSV file.

MANTA Automated Data Lineage will not function properly on IBM Knowledge Catalog Standard

Applies to: 5.0.0 and 5.0.1

If you install MANTA Automated Data Lineage when IBM Knowledge Catalog Standard is installed as the prerequisite, MANTA does not function properly.

To install MANTA Automated Data Lineage, you must have IBM Knowledge Catalog Premium installed.

Not all stages are displayed in technical data lineage graph for the imported DataStage ETL flow

Applies to: 5.0.0 and later

When you import a DataStage ETL flow and view it in the technical data lineage graph, only three stages are displayed, even when four stages were imported.

Workaround: By default, three connected elements are displayed in the graph. To display more elements, click the expand icon on the last or the first displayed element on the graph.

Lineage issues

You might encounter these known issues and restrictions with lineage.

Lineage metadata isn't shown on the Knowledge Graph after upgrading

Applies to: 5.0.0 and later

After upgrading to 4.7.2, an unknown error appears on the lineage tab.

Workaround: To see the Knowledge Graph again, resynchronize the lineage metadata of your catalogs. See Resync of lineage metadata.

Business data lineage is incomplete for the metadata imports with Get ETL job lineage or Get BI report lineage goals

Applies to: 5.0.0 and later

In some cases, when you display business lineage between databases and ETL jobs or BI reports, some assets are missing, for example, a starting database. The data was imported by using the Get ETL job lineage or Get BI report lineage import option. Technical data lineage correctly shows all assets.

Workaround: Sometimes MANTA Automated Data Lineage cannot map the connection information from an ETL job or a BI report to the existing connections in IBM Knowledge Catalog. Follow these steps to solve the issue:

  1. Open the MANTA Automated Data Lineage Admin UI:

    https://<CPD-HOSTNAME>/manta-admin-gui/
    
  2. Go to Log Viewer and from the Source filter select Workflow Execution.

  3. From the Workflow Execution filter, select the name of the lineage workflow that is associated with the incomplete business lineage.

  4. Look for the dictionary_manta_mapping_errors issue category and expand it.

  5. In each entry, expand the error and click View Log Details.

  6. In the details of each error, look for the value of connectionString. For example, in the following error message, the value of the connectionString parameter is DQ DB2 PX.

    2023/11/14 18:40:12.186 PM [CLI] WARN - <provider-name> [Context: [DS Job 2_PARAMETER_SET] flow in project [ede1ab09-4cc9-4a3f-87fa-8ba1ea2dc0d8_lineage]]
    DICTIONARY_MANTA_MAPPING_ERRORS - NO_MAPPING_FOR_CONNECTION
    User message: Connection in use could not be automatically mapped to one of the database connections configured in MANTA.
    Technical message: There is no mapping for the connection Connection [type=DB2, connectionString=DQ DB2 PX, serverName=dataquack.ddns.net, databaseName=cpd, schemaName=null, userName=db2inst1].
    Solution: Identify the particular database technology DB2 leading to "DQ DB2 PX" and configure it as a new connection or configure the manual mapping for that database technology in MANTA Admin UI.
    Lineage impact: SINGLE_INPUT
    
  7. Depending on the connection that you used for the metadata import, go to Configuration > CLI > connection server > connection server Alias Mapping, for example DB2 > DB2 Alias Mapping.

  8. Select the connection that is used in the workflow and click Full override.

  9. In the Connection ID field, add the value of the connectionString parameter that you found in the error details, for example DQ DB2 PX.

  10. Rerun the metadata import job in IBM Knowledge Catalog.

Data integration assets show columns

Applies to: 5.0.0, 5.0.1, and 5.0.2
Fixed in: 5.0.3

When you select Show Columns on a data asset that is connected with a data integration asset, the columns might be connected to columns of the data integration job asset that should not be shown.

Workaround: Expand the data integration asset before you select Show Columns on the connected data asset.

Clicking Expand and Collapse generates additional edges in the lineage graph

Applies to: 5.0.1
Fixed in: 5.0.2

When you work with the lineage graph, additional edges are shown each time you click Expand and Collapse.

Hops for components and columns of components work only inside the data integration flow area of an expanded job node

Applies to: 5.0.1 and later

When working with the lineage graph, hops for components and columns of data integration components work only inside the data integration flow area of an expanded job node and don't connect columns of nodes outside of the flow area.

Multiple saved filters with the same name

Applies to: 5.0.2 and later

Saved filters that have different filtering options can be given the same name. As a result, you might end up with multiple saved filters that have the same name but different filtering options.

Workaround: Give each saved filter a unique name so that you can easily distinguish between the filters.

Relationship explorer

You might encounter these known issues and restrictions with relationship explorer.

Related categories show at the bottom of the graph

Applies to: 5.0.0 and later

When you switch to the vertical view, the related category is displayed at the bottom of the graph.

Assets can’t be found when exploring relationships

Applies to: 5.0.0 and 5.0.1
Fixed in: 5.0.2

Assets that are added from a catalog to a project are not visible in Relationship Explorer. An error message might be displayed stating that the selected asset does not exist.

Reporting issues

You might encounter these known issues and restrictions with BI reporting.

Reporting setup page does not refresh when settings are changed for a project

Applies to: 5.0.0
Fixed in: 5.0.1

When you enable or disable reporting with the toggle in the Projects table on the Reporting setup page, the updated setting is not refreshed in the table even though a success message is displayed.

Workaround: Refresh the table to view the updated setting.

Unlocking personal vaulted connection for reporting fails

Applies to: 5.0.0, 5.0.1, and 5.0.2
Fixed in: 5.0.3

When you try to unlock a personal vaulted connection from the Reporting setup page, an error is displayed even if you fill in all required fields.

Workaround: Fill in all the fields in the Unlock connection window, even if they are not required. Alternatively, go to Data > Connectivity and provide all the required information for the connection you want to use.

Limitations

Catalogs and projects

Long names of the asset owners get truncated when hovering over their avatars

Applies to: 5.0.3 and later

When you hover over the avatar of an asset owner in the side panel to show the owner's long name, the name is truncated if it is longer than 40 characters and does not contain a space or a special character. If the name is longer than 40 characters, it is displayed correctly as long as it contains a space or '-' within the first 40 characters.

Can't add individual group members as asset members

Applies to: 5.0.0 and later

You can't directly add individual members of a user group as asset members. However, you can add individual group members as catalog collaborators and then add them as asset members.

Catalog asset search doesn't support special characters

Applies to: 5.0.0 and later

If search keywords contain any of the following special characters, the search filter doesn't return the most accurate results.

Special characters:

. + - && || ! ( ) { } [ ] ^ " ~ * ? : \

Workaround: To obtain the most accurate results, search only for the keyword after the special character. For example, instead of AUTO_DV1.SF_CUSTOMER, search for SF_CUSTOMER.

Missing default catalog and predefined data classes

Applies to: 5.0.0 and later

The automatic creation of the default catalog after installation of the IBM Knowledge Catalog service can fail. If it does, the predefined data classes are not automatically loaded and published as governance artifacts.

Workaround: Ask someone with the Administrator role to follow the instructions for creating the default catalog manually.

Special or double-byte characters in the data asset name are truncated on download

Applies to: 5.0

When you download a data asset with a name that contains special or double-byte characters from a catalog, these characters might be truncated from the name. For example, a data asset named special chars!&@$()テニス.csv will be downloaded as specialchars!().csv.

The following character sets are supported:

  • Alphanumeric characters: 0-9, a-z, A-Z
  • Special characters: ! - _ . * ' ( )

Catalog UI does not update when changes are made to the asset metadata

Applies to: 5.0

If the Catalog UI is open in a browser while the asset metadata is updated, the Catalog UI page does not automatically refresh to reflect the change. Outdated information continues to be displayed, which can cause external processes to produce incorrect information.

Workaround: After the asset metadata is updated, refresh the Catalog UI page at the browser level.

A blank page might be rendered when you search for terms while manually assigning terms to a catalog asset

Applies to: 5.0

When you search for a term to assign to a catalog asset and change that term while the search is running, a blank page might be shown instead of the search results.

Workaround: Rerun the search.

Governance artifacts

Cannot use CSV to move data class between Cloud Pak for Data instances

Applies to: 5.0.0 and later

If you try to export data classes with the matching method Match to reference data to CSV, and then import them into another Cloud Pak for Data instance, the import fails.

Workaround: For moving governance artifact data from one instance to another, especially data classes of this matching method, use the ZIP format export and import. For more information about the import methods, see Import methods for governance artifacts.

Masked data is not supported in data visualizations

Applies to: 5.0.0 and later

Masked data is not supported in data visualizations. If you attempt to work with masked data while generating a chart in the Visualizations tab of a data asset in a project, the following error message is displayed: Bad Request: Failed to retrieve data from server. Masked data is not supported.

Metadata import

Metadata import jobs might be stuck due to issues related to RabbitMQ

Applies to: 5.0.0 and later

If the metadata-discovery pod starts before the rabbitmq pods are up after a cluster reboot, metadata import jobs can get stuck while attempting to get the job run logs.

Workaround: To fix the issue, complete the following steps:

  1. Log in to the OpenShift console by using admin credentials.
  2. Go to Workloads > Pods.
  3. Search for rabbitmq.
  4. Delete the rabbitmq-0, rabbitmq-1, and rabbitmq-2 pods. Wait for the pods to be back up and running.
  5. Search for discovery.
  6. Delete the metadata-discovery pod. Wait for the pod to be back up and running.
  7. Rerun the metadata import job.
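
As an alternative to the OpenShift console, you can delete the pods with the oc CLI. The following commands are a minimal sketch that assumes the pods run in the Cloud Pak for Data instance namespace referenced by the PROJECT_CPD_INST_OPERANDS environment variable; adjust the namespace and the pod names to match your environment.

    # Delete the RabbitMQ pods; they are re-created automatically
    oc delete pod rabbitmq-0 rabbitmq-1 rabbitmq-2 -n ${PROJECT_CPD_INST_OPERANDS}

    # Wait until all rabbitmq pods report the Running status again
    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} | grep rabbitmq

    # Find and delete the metadata-discovery pod so that it reconnects to RabbitMQ
    oc get pods -n ${PROJECT_CPD_INST_OPERANDS} | grep metadata-discovery
    oc delete pod <metadata-discovery-pod-name> -n ${PROJECT_CPD_INST_OPERANDS}

As with the console procedure, wait for the pods to be back up and running before you rerun the metadata import job.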

Data assets might not be imported when running an ETL job lineage import for DataStage flows

Applies to: 5.0.0 and later

When you create and run a metadata import with the goal Get ETL job lineage where the scope is determined by the Select all DataStage flows and their dependencies in the project option, data assets from the connections associated with the DataStage flows are not imported.

Workaround: Explicitly select all DataStage flows and connections when you set the scope instead of using the Select all DataStage flows and their dependencies in the project option.

Metadata enrichment

In some cases, you might not see the full log of a metadata enrichment job run in the UI

Applies to: 5.0.0 and later

If the list of errors in a metadata enrichment run is exceptionally long, only part of the job log might be displayed in the UI.

Workaround: Download the entire log and analyze it in an external editor.

Schema information might be missing when you filter enrichment results

Applies to: 5.0.0 and later

When you filter assets or columns in the enrichment results on source information, schema information might not be available.

Workaround: Rerun the enrichment job and apply the Source filter again.

Profiling in catalogs, projects, and metadata enrichment might fail for Teradata connections

Applies to: 5.0.0 and later

If a Generic JDBC connection for Teradata exists with a driver version earlier than 17.20.00.15, profiling in catalogs and projects and metadata enrichment of data assets from that Teradata connection fail with an error message similar to the following one:

2023-02-15T22:51:02.744Z - cfc74cfa-db47-48e1-89f5-e64865a88304 [P] ("CUSTOMERS") - com.ibm.connect.api.SCAPIException: CDICO0100E: Connection failed: SQL error: [Teradata JDBC Driver] [TeraJDBC 16.20.00.06] [Error 1536] [SQLState HY000] Invalid connection parameter name SSLMODE (error code: DATA_IO_ERROR)

Workaround: For this workaround, users must be enabled to upload or remove JDBC drivers. For more information, see Enable users to upload, delete, or view JDBC drivers.

Complete these steps:

  1. Go to Data > Connectivity > JDBC drivers and delete the existing JAR file for Teradata (terajdbc4.jar).
  2. Edit the Generic JDBC connection, remove the selected JAR files, and add SSLMODE=ALLOW to the JDBC URL.
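
For illustration, a JDBC URL with the SSLMODE parameter added might look like the following sketch. The host name and database name are placeholders, and the exact URL format depends on your Teradata driver configuration:

    jdbc:teradata://<host_name>/DATABASE=<database_name>,SSLMODE=ALLOW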

For assets from SAP OData sources, the metadata enrichment results do not show the table type

Applies to: 5.0.0 and later

In general, metadata enrichment results show for each enriched data asset whether the asset is a table or a view. This information cannot be retrieved for data assets from SAP OData data sources and is thus not shown in the enrichment results.

Data quality

Rules run on columns of type timestamp with timezone fail

Applies to: 5.0.0 and later

The data type timestamp with timezone is not supported. You can't apply data quality rules to columns with that data type.

Rules fail because the job's warning limit is exceeded

Applies to: 5.0.0 and later

For some rules, the associated DataStage job fails because the warning limit is reached. The following error message is written to the job log:

Warning limit 100 for the job has been reached, failing the job.

The default limit for jobs associated with data quality rules is 100.

Workaround: Edit the configuration of the DataStage job and set the warning limit to 1,000. Then, rerun the job.

Lineage

An unnecessary edge appears when expanding data integration assets

Applies to: 5.0.0 and later

After you expand a data integration asset and click Show next or Show all, the transformer nodes have an unnecessary edge that points to themselves.

Parent topic: Known issues and limitations in Cloud Pak for Data