Troubleshooting
Problem
THE generate_postmortem.sh TOOL HAS BEEN DEPRECATED. DO NOT USE THIS DOCUMENT UNLESS YOU HAVE AN ISSUE USING THE apic-mustgather PYTHON SCRIPT. THE MUSTGATHER BASED ON THE apic-mustgather PYTHON SCRIPT CAN BE FOUND here.
Resolving The Problem
Each section in this document contains the instructions to collect the MustGather data for IBM API Connect on the associated subsystem. This data is required by IBM Support to effectively diagnose and resolve issues.
- Migration Issue: v5 to v10
- Migration Issue: v2018 to v10.x
- Migration Issue: v10 to v10.x
- Installation or Upgrade Issue (all subsystems): VMware deployment
- Installation or Upgrade Issue (all subsystems): OpenShift, IBM Cloud Pak for Integration, or Kubernetes deployment
- Management subsystem
- Developer Portal subsystem
- Analytics subsystem
- Gateway subsystem: Native Kubernetes deployment
- Gateway subsystem: OpenShift or IBM Cloud Pak for Integration deployment
- Gateway subsystem: VMware deployment or physical appliance
Migration Issue: v5 to v10
- Upload the following to the case:
- The v5 dbextract file that was produced with the following v5 command:
config dbextract sftp <host_name> user <user_name> file <path/name>
- .zip file of the migration utility logs directory
- Command that generated the specific errors that were observed
- Screen captures of specific errors observed
- OPTIONAL: If the issue occurred with the port-to-APIGW or push command, also upload the zipped cloud folder that was being used in the command
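The .zip of the migration utility logs directory can be produced with any archiver; below is a minimal sketch using tar/gzip (all paths are placeholders, not the real migration utility location):

```shell
# Minimal sketch (paths are placeholders, not the real migration
# utility location): bundle the logs directory into one archive
# that can be attached to the case.
mkdir -p migration-utility/logs                  # placeholder tree for the sketch
echo "sample entry" > migration-utility/logs/run.log
tar -czf migration-utility-logs.tgz -C migration-utility logs
tar -tzf migration-utility-logs.tgz              # list the archive contents
```

Substitute the actual migration utility directory for the placeholder path; a .zip produced with the zip utility works the same way.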
Migration Issue: v2018 to v10.x (OVA)
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- Run the postmortem tool for the failing subsystem (management, portal, or analytics) and note the location where the postmortem output file is saved:
- Via SSH, connect to the target appliance
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Switch to the root user
sudo -i
- Execute the following command:
- For an issue on the Portal subsystem:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-portal
- For an issue on the Analytics subsystem:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-analytics
- For an issue on the Management subsystem:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
- Additionally, for the Management subsystem, when the load python script succeeds:
- Take a backup of the upgrade PVC data and logs directories, as they contain details of any orphaned record deletions and other logs.
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
- When the issue is with the Management subsystem and the extract/load python script fails:
- Syslogs from each v2018 node in the cluster (these will contain information on the extract job)
- Syslogs from each v10 node in the cluster (these will contain information on the load job)
- Extract pod logs from v2018
- Load pod logs from v10
- If the load job fails, the extracted zip file that was given as input to the load script
- The python script outputs the location of the PV logs. Save the "data" and "logs" directories present in this location. The data directory contains the CSV files extracted from v2018; the logs directory contains the logs and status of the extract step. Share the logs directory and, if required, the relevant CSV file information.
- Output of any python script errors
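Saving the "data" and "logs" directories from the PV logs location can be sketched as follows; PV_LOGS_DIR and the backup destination are assumptions for illustration (the python script prints the real path):

```shell
# Sketch: copy the data and logs directories out of the PV logs
# location so they can be attached to the case. PV_LOGS_DIR and
# BACKUP_DIR are assumed paths for illustration.
PV_LOGS_DIR=./pv-logs
BACKUP_DIR=./upgrade-pvc-backup
mkdir -p "$PV_LOGS_DIR/data" "$PV_LOGS_DIR/logs"   # placeholders for the sketch
mkdir -p "$BACKUP_DIR"
cp -r "$PV_LOGS_DIR/data" "$PV_LOGS_DIR/logs" "$BACKUP_DIR"/
ls "$BACKUP_DIR"
```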
Migration Issue: v10 to v10.x (OVA)
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- Run the postmortem tool for the failing subsystem (management, portal, or analytics) and note the location where the postmortem output file is saved:
- Via SSH, connect to the target appliance
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Switch to the root user
sudo -i
- Execute the following command:
- For an issue on the Management subsystem:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
- For an issue on the Portal subsystem:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-portal
- For an issue on the Analytics subsystem:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-analytics
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
- Screen capture or output of the failing Python script
- Archive of the project directory (folder)
Installation or Upgrade Issue (all subsystems): VMware deployment
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- Run the postmortem tool and note the location where the postmortem output file is saved:
- Via SSH, connect to the target appliance encountering the issue
- Switch to the root user
sudo -i
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Execute the following command:
./generate_postmortem.sh --ova --pull-appliance-logs
- As root user, gather the status of apic:
apic status > apic_status.out
- As root user, gather the version of apic:
apic version > apic_version.out
- Upload the following to the case:
- Any error messages received from running the apicup subsys install command
- apiconnect-logs-*.tgz/.zip file that was generated from the postmortem command
- apic_status.out
- apic_version.out
- Archive file of the apicup project directory
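The archive of the apicup project directory can be created as sketched below; 'myProject' is an assumed path, so substitute the real project directory:

```shell
# Sketch: archive the apicup project directory for upload.
# 'myProject' is an assumed project path for illustration.
mkdir -p myProject                  # placeholder so the sketch runs standalone
tar -czf apicup-project.tgz myProject
```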
Installation or Upgrade Issue (all subsystems): OpenShift, IBM Cloud Pak for Integration, or Kubernetes deployment
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OpenShift or IBM Cloud Pak for Integration deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-all --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-all --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Native Kubernetes deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh --diagnostic-all --extra-namespaces=APIC_NAMESPACE
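The --extra-namespaces value is a single comma-separated string; building it from a whitespace-separated list of namespaces can be sketched as below (the namespace names are assumptions):

```shell
# Sketch (namespace names are assumptions): join several API Connect
# namespaces into the comma-separated form --extra-namespaces expects.
NAMESPACES="dev1 dev2 dev3"
EXTRA_NS=$(echo "$NAMESPACES" | tr ' ' ',')
echo "--extra-namespaces=$EXTRA_NS"   # prints --extra-namespaces=dev1,dev2,dev3
```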
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
- If encountering certificate-related errors post-install, the secrets.yaml generated via the following command:
kubectl get secrets -n NAMESPACE -o yaml > secrets.yaml
Management subsystem
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- Reproduce the problem
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OVA deployment:
- Via SSH, connect to the target appliance
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Switch to the root user
sudo -i
- Execute the following command:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
- OpenShift or IBM Cloud Pak for Integration deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-manager --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-manager --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Native Kubernetes deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
- Steps to reproduce the problem
- Time that the error occurred or start/stop time of reproducing the error
- Screen capture of error (if applicable)
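One way to capture the "time that the error occurred" item is to record the reproduction window in UTC; the file names here are illustrative:

```shell
# Sketch: record the start/stop of the reproduction window in UTC so
# Support can correlate it with the collected logs. File names are
# illustrative only.
date -u +"%Y-%m-%dT%H:%M:%SZ" > repro_start.txt
# ... reproduce the problem here ...
date -u +"%Y-%m-%dT%H:%M:%SZ" > repro_stop.txt
```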
Developer Portal subsystem
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
- Reproduce the problem
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OVA deployment:
- Via SSH, connect to the target appliance
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Switch to the root user
sudo -i
- Execute the following command:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-portal
- OpenShift or IBM Cloud Pak for Integration deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-portal --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-portal --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Native Kubernetes deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh --diagnostic-portal --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
- Steps to reproduce the problem
- Time that the error occurred or start/stop time of reproducing the error
- Screen capture of error (if applicable)
Analytics subsystem
- If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
- Reproduce the problem
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OVA deployment:
- Via SSH, connect to the target appliance
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Switch to the root user
sudo -i
- Execute the following command:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-analytics
- OpenShift or IBM Cloud Pak for Integration deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-analytics --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-analytics --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Native Kubernetes deployment:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh --diagnostic-analytics --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
- Steps to reproduce the problem
- Time that the error occurred or start/stop time of reproducing the error
- Screen capture of error (if applicable)
Gateway subsystem: Native Kubernetes deployment
- Download and install:
- The generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- The latest apicops command-line interface
- Ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem
- Run the postmortem and apicops tools, and note the location where the postmortem output file is saved:
- If the management and gateway subsystems are installed in the same Kubernetes cluster:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh --diagnostic-all --extra-namespaces=APIC_NAMESPACE
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- If the management and gateway subsystems are installed in different Kubernetes clusters:
- On the node on which the management subsystem is installed:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect management namespace:
./generate_postmortem.sh --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- On the node on which the gateway subsystem is installed:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect gateway namespace:
./generate_postmortem.sh --diagnostic-gateway --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip files generated from the postmortem commands
- Output from the apicops-linux command in step 3
- Time that the error occurred or start/stop time of reproducing the error
- For a specific API that is failing (optional, and in addition to steps 4.1 - 4.3):
- For the API Gateway:
- files under temporary://
- related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
- For the API Gateway:
Gateway subsystem: OpenShift or IBM Cloud Pak for Integration deployment
- Download and install:
- The generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- The latest apicops command-line interface
- Ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem
- Run the postmortem and apicops tools, and note the location where the postmortem output file is saved:
- If the management and gateway subsystems are installed in the same Kubernetes cluster:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-all --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-all --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- If the management and gateway subsystems are installed in different Kubernetes clusters:
- On the node on which the management subsystem is installed:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect management namespace:
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-manager --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-manager --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- On the node on which the gateway subsystem is installed:
- From a command prompt on a Unix based node that has access to the Kubernetes cluster
- Change to the directory where the generate_postmortem.sh script was downloaded to the node
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect gateway namespace:
- If deployed through CP4I
./generate_postmortem.sh --diagnostic-gateway --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
- If NOT deployed through CP4I
./generate_postmortem.sh --diagnostic-gateway --extra-namespaces=openshift-operators,APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip files generated from the postmortem commands
- Output from the apicops-linux command in step 3
- Time that the error occurred or start/stop time of reproducing the error
- For a specific API that is failing (optional, and in addition to steps 4.1 - 4.3):
- For the API Gateway:
- files under temporary://
- related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
Gateway subsystem: VMware deployment or physical appliance
- Via SSH, connect to the DataPower server
- Collect API Connect gateway service log data by configuring the following log target in the API Connect application domain using the CLI.
- Repeat this step for each gateway server in the cluster
sw <apiconnect domain>
configure terminal
logging target gwd-log
type file
format text
timestamp zulu
size 50000
local-file logtemp:///gwd-log.log
event apic-gw-service debug
exit
apic-gw-service;admin-state disabled;exit
apic-gw-service;admin-state enabled;exit
write mem
exit
- OPTIONAL: Enable gateway-peering debug logs via the DataPower CLI.
- Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
NOTE: To determine the configured peering objects, issue the following command within the apiconnect domain: show gateway-peering-status
sw <apiconnect domain>
diagnostics
gateway-peering-debug GW_PEERING_OBJECT_NAME
exit
- On the management subsystem, download and install:
- The generate_postmortem.sh and crunchy_gather.py or edb_gather.sh script.
NOTE: Here is a link to the deprecated README for generate_postmortem.sh
NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for earlier versions, download crunchy_gather.py.
- The latest apicops command-line interface
- Ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem.
- On the management subsystem, run the postmortem and apicops tools, and note the location where the postmortem output file is saved:
- Via SSH, connect to the target appliance
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Switch to the root user
sudo -i
- Execute the following command:
./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- Generate an error-report via the DataPower CLI.
- Repeat this step for each gateway server in the cluster
sw default
conf; save error-report
- OPTIONAL (only perform this step if you performed step 3): Dump the gateway-peering debug logs via the DataPower CLI and then disable gateway-peering debug.
- Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
NOTE: To determine the configured peering objects, issue the following command within the apiconnect domain: show gateway-peering-status
sw <apiconnect domain>
diagnostics
gateway-peering-dump GW_PEERING_OBJECT_NAME
no gateway-peering-debug GW_PEERING_OBJECT_NAME
exit
- Upload the following to the case:
- apiconnect-logs-*.tgz/zip file that was generated from the postmortem command on the management subsystem
- Output from the apicops-linux command in step 6
- For each gateway server in the cluster:
- The gateway service log written to logtemp://gwd-log.log in the apiconnect domain
- <error report filename>.txt.gz (error report)
- gateway-peering logs (gatewaypeering.log and gatewaypeeringmonitor.log) in temporary:///<name of gateway peering object in API Connect application domain>
- Output of the following command issued from the DataPower command-line interface: `show gateway-peering-status`
- Time that the error occurred or start/stop time of reproducing the error
- For a specific API that is failing (optional, and in addition to steps 9.1 - 9.3):
- For the API Gateway:
- Files under temporary://
- Related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- Related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
How to submit diagnostic data to IBM Support
After you have collected the preceding information and the case is opened, see Submit diagnostic data to IBM (ECuRep) and Enhanced Customer Data Repository (ECuRep) secure upload.
Document Location
Worldwide
Document Information
Modified date:
05 June 2024
UID
ibm17148145