IBM Support

***DEPRECATED generate_postmortem.sh *** MustGather: API Connect v10 (all subsystems)

Troubleshooting


Problem

THE generate_postmortem.sh TOOL HAS BEEN DEPRECATED. DO NOT USE THIS DOCUMENT UNLESS YOU HAVE AN ISSUE USING THE apic-mustgather PYTHON SCRIPT. THE MUSTGATHER BASED ON THE apic-mustgather PYTHON SCRIPT CAN BE FOUND here.

Resolving The Problem

Each section in this document contains the instructions to collect the MustGather data for IBM API Connect on the associated subsystem. This data is required by IBM Support to effectively diagnose and resolve issues.
 
 
 
 
 
 
Migration Issue: v5 to v10
  • Upload the following to the case:
    1. The v5 dbextract file that was produced with the following v5 command:
      config dbextract sftp <host_name> user <user_name> file <path/name>
    2. A .zip file of the migration utility logs directory
    3. The command that generated the specific errors that were observed
    4. Screen captures of the specific errors observed
    5. OPTIONAL: If the issue occurred with the port-to-APIGW or push command, also upload the zipped cloud folder that was being used in the command


 
 

Migration Issue: v2018 to v10.x (OVA)
  1. If the postmortem tool has not yet been installed, or has not been updated in the past month, download and install the generate_postmortem.sh script plus crunchy_gather.py or edb_gather.sh.
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
    NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for previous versions, download crunchy_gather.py
  2. Run the postmortem tool for the failing subsystem (management, portal, or analytics) and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target appliance
    2. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
    3. Switch to the root user
      sudo -i
    4. Execute the following command:
       
      • For an issue on the Portal subsystem:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-portal
       
      • For an issue on the Analytics subsystem:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-analytics
      • For an issue on the Management subsystem:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
  3. Additionally, for the Management subsystem, when the load Python script succeeds:
    • Take a backup of the upgrade PVC data and logs directory, as it contains details of any orphaned record deletions and other logs.
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
    2. If the issue is with the Management subsystem and the extract/load Python script fails:
      1. Syslogs from each v2018 node in the cluster (these contain information on the extract job)
      2. Syslogs from each v10 node in the cluster (these contain information on the load job)
      3. Extract pod logs from v2018
      4. Load pod logs from v10
      5. If the load job fails, the extracted .zip file that was given as input to the load script
      6. The Python script outputs the location of the PV logs. Save the "data" and "logs" directories present in this location: the data directory contains the .csv files extracted from v2018, and the logs directory contains the logs and status of the extract step. Share the logs directory and, if required, the relevant .csv file information.
      7. The output of any Python script errors
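The "data" and "logs" directories described in item 6 can be bundled into a single archive before upload. A minimal sketch (the bundle_pv_logs helper name and the example path are illustrative, not part of the tooling):

```shell
# bundle_pv_logs DIR: archive the "data" and "logs" subdirectories of DIR
# (the PV location printed by the extract/load Python script) into one .tgz
bundle_pv_logs() {
  pv_dir="$1"
  archive="pv-extract-logs.tgz"
  # -C keeps the paths inside the archive relative to the PV directory
  tar -czf "$archive" -C "$pv_dir" data logs && echo "$archive"
}

# Example (hypothetical path -- use the location your script actually printed):
#   bundle_pv_logs /path/to/pv-logs
```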

Migration Issue: v10 to v10.x (OVA)
  1. If the postmortem tool has not yet been installed, or has not been updated in the past month, download and install the generate_postmortem.sh script plus crunchy_gather.py or edb_gather.sh.
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
    NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for previous versions, download crunchy_gather.py
  2. Run the postmortem tool for the failing subsystem (management, portal, or analytics) and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target appliance
    2. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
    3. Switch to the root user
      sudo -i
    4. Execute the following command:
      • For an issue on the Management subsystem:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
       
      • For an issue on the Portal subsystem:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-portal
       
      • For an issue on the Analytics subsystem:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-analytics
  3. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
    2. Screen capture or output of the failing Python script
    3. Archive of the project directory
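The three postmortem invocations above differ only in the --diagnostic-* flag, so the subsystem name can be mapped to the flag programmatically. A small sketch (the diag_flag helper is illustrative; the flag names come from the commands above):

```shell
# diag_flag SUBSYSTEM: map a failing subsystem name to the matching
# generate_postmortem.sh diagnostic flag
diag_flag() {
  case "$1" in
    management) echo "--diagnostic-manager" ;;
    portal)     echo "--diagnostic-portal" ;;
    analytics)  echo "--diagnostic-analytics" ;;
    *)          echo "unknown subsystem: $1" >&2; return 1 ;;
  esac
}

# Example (run as root on the appliance):
#   ./generate_postmortem.sh --ova --pull-appliance-logs "$(diag_flag portal)"
```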
Installation or Upgrade Issue (all subsystems): VMware deployment
  1. If the postmortem tool has not yet been installed, or has not been updated in the past month, download and install the generate_postmortem.sh script plus crunchy_gather.py or edb_gather.sh.
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
    NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for previous versions, download crunchy_gather.py
  2. Run the postmortem tool and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target appliance encountering the issue
    2. Switch to the root user
      sudo -i
    3. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
    4. Execute the following command:
      ./generate_postmortem.sh --ova --pull-appliance-logs
  3. As root user, gather the status of apic:
    apic status > apic_status.out
  4. As root user, gather the version of apic:
    apic version > apic_version.out
  5. Upload the following to the case:
    1. Any error messages received from running the  apicup subsys install command
    2. apiconnect-logs-*.tgz/.zip file that was generated from the postmortem command
    3. apic_status.out
    4. apic_version.out
    5. Archive file of the apicup project directory
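Steps 3 and 4 can be combined into a small helper that echoes the command output while also saving the .out file for upload. A sketch, assuming the apic CLI is available to the root user (the run_capture name is illustrative):

```shell
# run_capture NAME CMD...: run CMD, print its output, and save a copy to NAME.out
run_capture() {
  name="$1"; shift
  "$@" 2>&1 | tee "${name}.out"
}

# As root on the appliance:
#   run_capture apic_status  apic status
#   run_capture apic_version apic version
```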
 




 
Installation or Upgrade Issue (all subsystems): OpenShift, IBM Cloud Pak for Integration, or Kubernetes deployment
  1. If the postmortem tool has not yet been installed, or has not been updated in the past month, download and install the generate_postmortem.sh script plus crunchy_gather.py or edb_gather.sh.
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
    NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for previous versions, download crunchy_gather.py
  2. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OpenShift or IBM Cloud Pak for Integration deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
        • If deployed through CP4I
          ./generate_postmortem.sh --diagnostic-all --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
        • If NOT deployed through CP4I
          ./generate_postmortem.sh --diagnostic-all --extra-namespaces=openshift-operators,APIC_NAMESPACE
    • Native Kubernetes deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh --diagnostic-all --extra-namespaces=APIC_NAMESPACE
  3. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
    2. If certificate-related errors are encountered post-install, the secrets.yaml generated via the following command:
      kubectl get secrets -n NAMESPACE -o yaml > secrets.yaml
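For the OpenShift and CP4I variants above, the --extra-namespaces value is simply a comma-separated list, so it can be assembled from the deployment type plus the API Connect namespaces. A sketch (the extra_ns helper and the dev1/dev2 names are illustrative; for native Kubernetes only the API Connect namespaces are passed):

```shell
# extra_ns CP4I_FLAG NAMESPACE...: build the --extra-namespaces value
# (CP4I_FLAG is "yes" when deployed through CP4I, "no" otherwise)
extra_ns() {
  cp4i="$1"; shift
  base="openshift-operators"
  if [ "$cp4i" = "yes" ]; then
    base="ibm-common-services,$base"
  fi
  # Join the API Connect namespaces onto the base list with commas
  joined=$(IFS=,; echo "$*")
  echo "--extra-namespaces=${base},${joined}"
}

# Example:
#   ./generate_postmortem.sh --diagnostic-all "$(extra_ns yes dev1 dev2)"
```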
 




 
Management subsystem
  1. If the postmortem tool has not yet been installed, or has not been updated in the past month, download and install the generate_postmortem.sh script plus crunchy_gather.py or edb_gather.sh.
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
    NOTE: For API Connect v10.0.7 or later, download edb_gather.sh; for previous versions, download crunchy_gather.py
  2. Reproduce the problem
  3. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OVA deployment:
      1. Via SSH, connect to the target appliance
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
      3. Switch to the root user
        sudo -i
      4. Execute the following command:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
    • OpenShift or IBM Cloud Pak for Integration deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
        • If deployed through CP4I
          ./generate_postmortem.sh --diagnostic-manager --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
        • If NOT deployed through CP4I
          ./generate_postmortem.sh --diagnostic-manager --extra-namespaces=openshift-operators,APIC_NAMESPACE
    • Native Kubernetes deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
    2. Steps to reproduce the problem
    3. Time that the error occurred or start/stop time of reproducing the error
    4. Screen capture of error (if applicable)
 




 
Developer Portal subsystem
  1. If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh script. 
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
  2. Reproduce the problem
  3. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OVA deployment:
      1. Via SSH, connect to the target appliance
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
      3. Switch to the root user
        sudo -i
      4. Execute the following command:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-portal
    • OpenShift or IBM Cloud Pak for Integration deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
        • If deployed through CP4I
          ./generate_postmortem.sh --diagnostic-portal --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
        • If NOT deployed through CP4I
          ./generate_postmortem.sh --diagnostic-portal --extra-namespaces=openshift-operators,APIC_NAMESPACE
    • Native Kubernetes deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh --diagnostic-portal --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
    2. Steps to reproduce the problem
    3. Time that the error occurred or start/stop time of reproducing the error
    4. Screen capture of error (if applicable)
 




 
Analytics subsystem
  1. If the postmortem tool has not yet been installed or has not been updated in the past month, download and install the generate_postmortem.sh script. 
    NOTE: Here is a link to the deprecated README for generate_postmortem.sh
  2. Reproduce the problem
  3. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OVA deployment:
      1. Via SSH, connect to the target appliance
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
      3. Switch to the root user
        sudo -i
      4. Execute the following command:
        ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-analytics
    • OpenShift or IBM Cloud Pak for Integration deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=openshift-operators,dev1,dev2,dev3
        • If deployed through CP4I
          ./generate_postmortem.sh --diagnostic-analytics --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
        • If NOT deployed through CP4I
          ./generate_postmortem.sh --diagnostic-analytics --extra-namespaces=openshift-operators,APIC_NAMESPACE
    • Native Kubernetes deployment:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh --diagnostic-analytics --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command
    2. Steps to reproduce the problem
    3. Time that the error occurred or start/stop time of reproducing the error
    4. Screen capture of error (if applicable)
 




 
Gateway subsystem: Native Kubernetes deployment
  1. Download and install the postmortem (generate_postmortem.sh) and apicops tools.
  2. Reproduce the problem
  3. Run the postmortem and apicops tools, and note the location where the postmortem output file is saved:
    • If the management and gateway subsystems are installed in the same Kubernetes cluster:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh --diagnostic-all --extra-namespaces=APIC_NAMESPACE
      4. Execute the following command:
        NOTE: If the command returns an error, review the steps documented in the requirements section
        ./apicops-linux debug:info
    • If the management and gateway subsystems are installed in different Kubernetes clusters:
      • On the node on which the management subsystem is installed:
        1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
        2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
        3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect management namespace:
          ./generate_postmortem.sh --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
        4. Execute the following command:
          NOTE: If the command returns an error, review the steps documented in the requirements section
          ./apicops-linux debug:info
      • On the node on which the gateway subsystem is installed:
        1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
        2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
        3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect gateway namespace:
          ./generate_postmortem.sh --diagnostic-gateway --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip files generated from the postmortem commands
    2. Output from apicops-linux command in step 3
    3. Time that the error occurred or start/stop time of reproducing the error
    4. For a specific API that is failing (optional, in addition to steps 4.1 - 4.3):
      • For the API Gateway: 
        • files under temporary://
        • related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
 




 
Gateway Subsystem: OpenShift or IBM Cloud Pak for Integration deployment
  1. Download and install the postmortem (generate_postmortem.sh) and apicops tools.
  2. Reproduce the problem
  3. Run the postmortem and apicops tools, and note the location where the postmortem output file is saved:
    • If the management and gateway subsystems are installed in the same Kubernetes cluster:
      1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
      2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
      3. Execute the following command and replace APIC_NAMESPACE with the value(s) for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        • If deployed through CP4I
          ./generate_postmortem.sh --diagnostic-all --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
        • If NOT deployed through CP4I
          ./generate_postmortem.sh --diagnostic-all --extra-namespaces=openshift-operators,APIC_NAMESPACE
      4. Execute the following command:
        NOTE: If the command returns an error, review the steps documented in the requirements section
        ./apicops-linux debug:info
    • If the management and gateway subsystems are installed in different Kubernetes clusters:
        • On the node on which the management subsystem is installed:
        1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
        2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
        3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect management namespace:
          • If deployed through CP4I
            ./generate_postmortem.sh --diagnostic-manager --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
          • If NOT deployed through CP4I
            ./generate_postmortem.sh --diagnostic-manager --extra-namespaces=openshift-operators,APIC_NAMESPACE
        4. Execute the following command:
          NOTE: If the command returns an error, review the steps documented in the requirements section
          ./apicops-linux debug:info
        • On the node on which the gateway subsystem is installed:
        1. From a command prompt on a Unix based node that has access to the Kubernetes cluster 
        2. Change to the directory where the generate_postmortem.sh script was downloaded to the node
        3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect gateway namespace:
          • If deployed through CP4I
            ./generate_postmortem.sh --diagnostic-gateway --extra-namespaces=ibm-common-services,openshift-operators,APIC_NAMESPACE
          • If NOT deployed through CP4I
            ./generate_postmortem.sh --diagnostic-gateway --extra-namespaces=openshift-operators,APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip files generated from the postmortem commands
    2. Output from apicops-linux command in step 3
    3. Time that the error occurred or start/stop time of reproducing the error
    4. For a specific API that is failing (optional, in addition to steps 4.1 - 4.3):
      • For the API Gateway: 
        • files under temporary://
        • related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
   




 
Gateway subsystem: VMware deployment or physical appliance
  1. Via SSH, connect to the DataPower server
  2. Collect API Connect gateway service log data by configuring the following log target in the API Connect application domain using the CLI.
    • Repeat this step for each gateway server in the cluster
      sw <apiconnect domain>
      configure terminal
      logging target gwd-log
      type file
      format text
      timestamp zulu
      size 50000
      local-file logtemp:///gwd-log.log
      event apic-gw-service debug
      exit
      apic-gw-service;admin-state disabled;exit
      apic-gw-service;admin-state enabled;exit
      write mem
      exit
  3. OPTIONAL: Enable gateway-peering debug logs via the DataPower CLI.
    • Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
      NOTE: To determine the configured peering objects, issue the following command within the apiconnect domain: show gateway-peering-status
      sw <apiconnect domain>
      diagnostics
      gateway-peering-debug GW_PEERING_OBJECT_NAME
      exit
  4. On the management subsystem, download and install the postmortem (generate_postmortem.sh) and apicops tools.
  5. Reproduce the problem.
  6. On the management subsystem, run the postmortem and apicops tools, and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target appliance
    2. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
    3. Switch to the root user
      sudo -i
    4. Execute the following command:
      ./generate_postmortem.sh --ova --pull-appliance-logs --diagnostic-manager
    5. Execute the following command:
      NOTE: If the command returns an error, review the steps documented in the requirements section
      ./apicops-linux debug:info
  7. Generate an error-report via the DataPower CLI.
    • Repeat this step for each gateway server in the cluster
      sw default
      conf; save error-report
  8. OPTIONAL (only perform this step if you performed step 3): Dump the gateway-peering debug logs via the DataPower CLI and then disable gateway-peering debug.
    • Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
      NOTE: To determine the configured peering objects, issue the following command within the apiconnect domain: show gateway-peering-status
      sw <apiconnect domain>
      diagnostics
      gateway-peering-dump GW_PEERING_OBJECT_NAME
      no gateway-peering-debug GW_PEERING_OBJECT_NAME
      exit
  9. Upload the following to the case:
    1. apiconnect-logs-*.tgz/zip file that was generated from the postmortem command on the management subsystem
    2. Output from apicops-linux command in step 6
    3. For each gateway server in the cluster:
      • The gateway service log written to logtemp://gwd-log.log in the apiconnect domain
      • <error report filename>.txt.gz (error report) 
      • gateway-peering logs (gatewaypeering.log and gatewaypeeringmonitor.log) in temporary:///<name of gateway peering object in API Connect application domain>
      • Output of the following command issued from the DataPower command-line interface: show gateway-peering-status
    4. Time that the error occurred or start/stop time of reproducing the error
    5. For a specific API that is failing (optional, in addition to steps 9.1 - 9.3):
      • For the API Gateway: 
        • Files under temporary://
        • Related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • Related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
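Because the step 2 log-target configuration is identical on every gateway, the CLI batch can be generated once and sent to each appliance, for example over an SSH session to the DataPower CLI. A sketch (the gwd_log_cli helper and the host name are illustrative; the command sequence is the one from step 2, and the domain name must match your API Connect application domain):

```shell
# gwd_log_cli DOMAIN: emit the DataPower CLI batch from step 2 for DOMAIN
gwd_log_cli() {
  domain="$1"
  cat <<EOF
sw ${domain}
configure terminal
logging target gwd-log
type file
format text
timestamp zulu
size 50000
local-file logtemp:///gwd-log.log
event apic-gw-service debug
exit
apic-gw-service;admin-state disabled;exit
apic-gw-service;admin-state enabled;exit
write mem
exit
EOF
}

# Example (hypothetical host name):
#   gwd_log_cli apiconnect | ssh admin@dp-host
```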
 
  How to submit diagnostic data to IBM Support 

  After you have collected the preceding information and the case is opened, see:
  Exchanging information with IBM Technical Support.

  For more details, see Submit diagnostic data to IBM (ECuRep) and Enhanced Customer Data Repository (ECuRep) secure upload.


Document Location

Worldwide


Document Information

Modified date:
05 June 2024

UID

ibm17148145