IBM Support

MustGather: API Connect Management server v2018

Troubleshooting


Problem

Contact IBM Support if you experience a problem with IBM API Connect. Use the details in this document to provide all of the relevant information to the Support team to help with problem resolution.

Resolving The Problem

IMPORTANT
This document has been replaced. Please use the following document instead: API Connect v2018 postmortem tool: Collect the MustGather output required by IBM support with a single command
  • Deprecated Instructions: Do Not Use
    1. Ensure that the CPU, memory, and disk system requirements are met (a quick check is sketched below).
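       A minimal sketch for gathering the values to compare against the documented system requirements, assuming shell access to each management VM or worker node; the commands are standard Linux utilities and are not part of the original instructions:

      # Gather CPU, memory, and disk figures for comparison with the requirements
      nproc        # number of CPU cores
      free -h      # total and available memory
      df -h        # disk space per filesystem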
    2. If deployed via OVA, provide the output of the following commands and all syslogs:
       
      apic version --semver
      apic logs
      All syslogs from the /var/log directory on each management subsystem VM (syslog, syslog.1, syslog.2.gz, and so on); one way to bundle them is sketched below
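
       A minimal sketch for bundling the syslogs on a management subsystem VM before copying them off; the archive name is only an example:

      # Run on each management subsystem VM; picks up rotated syslogs as well
      sudo tar -czf mgmt-syslogs-$(hostname).tgz /var/log/syslog*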
    3. If deployed via Kubernetes, provide the output of the following commands, where $APIC_NAMESPACE is the namespace in which API Connect was deployed (a sketch that captures this output to files follows the pod-log script below):
       
      kubectl version
      helm version
      Helm release values for each API Connect helm chart deployed: helm get values $APIC_RELEASE --all
      helm ls -a
      kubectl get pods -a -n $APIC_NAMESPACE
      kubectl get endpoints -a -n $APIC_NAMESPACE
      kubectl get ingress -a -n $APIC_NAMESPACE
      apicup version --semver
      kubectl -n kube-system logs -l component=kube-apiserver > kube-apiserver.out
      kubectl -n kube-system logs -l component=kube-controller-manager > kube-controller-manager.out
      kubectl -n kube-system logs -l component=kube-scheduler > kube-scheduler.out
      kubectl -n kube-system get events > kube-system-events.out
      kubectl get pvc -n $APIC_NAMESPACE
      kubectl describe pods -n $APIC_NAMESPACE
      • For installation issues: the output of the failing command, rerun with --debug appended.
      • Collect the log files for all pods in the namespace with the attached script (get_pod_logs.zip) and upload the resulting apic-logs.tgz. The script needs slight modification before execution. Alternatively, create a script from the content below.

       
      # $APIC_NAMESPACE is the namespace where API Connect was deployed
      # $LOG_DIR is a destination directory for the log files; any empty directory will do
      for __pod in $(kubectl get pods -n "$APIC_NAMESPACE" --show-all -o name | cut -d'/' -f2); do
          for __container in $(kubectl get pod -n "$APIC_NAMESPACE" "$__pod" -o jsonpath="{.spec.containers[*].name}"); do
              # Capture current and, where present, previous container logs
              kubectl logs -n "$APIC_NAMESPACE" "$__pod" -c "$__container" &> "$LOG_DIR/${__pod}_${__container}.log"
              kubectl logs --previous -n "$APIC_NAMESPACE" "$__pod" -c "$__container" &> "$LOG_DIR/${__pod}_${__container}__previous.log"
              [ $? -eq 0 ] || rm -f "$LOG_DIR/${__pod}_${__container}__previous.log"
          done
      done
      # Archive everything collected in $LOG_DIR
      tar -czf apic-logs.tgz -C "$LOG_DIR" .
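
       The remaining command output requested in this step can be captured to files in the same directory; this is a sketch under the same assumptions ($APIC_NAMESPACE, $APIC_RELEASE, and $LOG_DIR already set), and the output file names are arbitrary:

      # Capture the step 3 command output into $LOG_DIR alongside the pod logs
      kubectl version                               > "$LOG_DIR/kubectl-version.out" 2>&1
      helm version                                  > "$LOG_DIR/helm-version.out" 2>&1
      helm get values "$APIC_RELEASE" --all         > "$LOG_DIR/helm-values.out" 2>&1
      helm ls -a                                    > "$LOG_DIR/helm-ls.out" 2>&1
      kubectl get pods -a -n "$APIC_NAMESPACE"      > "$LOG_DIR/pods.out" 2>&1
      kubectl get endpoints -a -n "$APIC_NAMESPACE" > "$LOG_DIR/endpoints.out" 2>&1
      kubectl get ingress -a -n "$APIC_NAMESPACE"   > "$LOG_DIR/ingress.out" 2>&1
      kubectl get pvc -n "$APIC_NAMESPACE"          > "$LOG_DIR/pvc.out" 2>&1
      kubectl describe pods -n "$APIC_NAMESPACE"    > "$LOG_DIR/describe-pods.out" 2>&1
      apicup version --semver                       > "$LOG_DIR/apicup-version.out" 2>&1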

      Submit the following output/files:

      1. Output from all of the commands in step 2 or step 3 above.
      2. The apiconnect-up.yml file.
      3. The apicup plan directory, if available: zip up the plan directory that was used for installation (if sharing certificates is a concern, skip the certs directories; see the sketch after this list).
      4. For installation issues: the output of the failing command, rerun with --debug appended.
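
       One way to package the plan directory while skipping the certs directories, as suggested in item 3; this is a sketch, and ./myProject/plan is a placeholder for the actual plan directory used for installation:

      # Archive the apicup plan directory, excluding any directory named certs
      tar --exclude='certs' -czf apicup-plan.tgz ./myProject/plan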

    [{"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Product":{"code":"SSMNED","label":"IBM API Connect"},"Component":"Management","Platform":[{"code":"PF009","label":"Firmware"}],"Version":"2018.x","Edition":"","Line of Business":{"code":"LOB45","label":"Automation"}}]

    Document Information

    Modified date:
    27 April 2020

    UID

    ibm10720165