IBM Support

MustGather: API Connect v2018 (all subsystems)

Troubleshooting


Problem

This document provides instructions for gathering diagnostic data for API Connect deployments on both Kubernetes and VMware. 

Diagnosing The Problem

Each section in this document contains instructions for collecting the MustGather data for IBM API Connect for the associated subsystem. IBM Support requires this data to diagnose and resolve issues effectively.
 
 
 
 
 
 
Migration Issue: v5 to v2018
  • Upload the following to the case:
    1. The v5 dbextract file, which was produced with the following v5 command:
      config dbextract sftp <host_name> user <user_name> file <path/name>
    2. A .zip file of the migration utility logs directory
    3. The command that generated the specific errors that were observed
    4. A screen capture of the specific errors that were observed
    5. OPTIONAL: if the issue occurred with the port-to-APIGW or push command, also upload a .zip file of the cloud folder that was used in the command
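  The archive in item 2 can be produced with a short command sequence. A minimal sketch, assuming the migration utility logs sit in a directory named logs under the migration utility's working directory (LOGS_DIR and the archive name are assumptions; substitute your actual paths):

```shell
# Sketch only: package the migration utility logs directory for upload.
# LOGS_DIR and ARCHIVE are assumptions; substitute your actual paths.
LOGS_DIR="logs"
ARCHIVE="migration-logs-$(date +%Y%m%d).zip"
CMD="zip -r ${ARCHIVE} ${LOGS_DIR}"
echo "${CMD}"   # printed as a dry run; run the command from the migration utility directory
```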
 

Back to top



 
Installation or Upgrade Issue (all subsystems): VMware deployment
  1. Download and install the latest v2018-postmortem-tool
    NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
  2. Run the postmortem tool and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target appliance encountering the issue
    2. Switch to the root user
      sudo -i
    3. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
    4. Execute the following command:
      ./generate_postmortem.sh 0 --ova --pull-appliance-logs
  3. As root user, gather the status of apic:
    apic status > apic_status.out
  4. As root user, gather the version of apic:
    apic version > apic_version.out
  5. Upload the following to the case:
    1. Any error messages received from running the apicup subsys install command
    2. apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
    3. apic_status.out
    4. apic_version.out
    5. Archive file of the apicup project directory
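  Steps 2 through 4 above can be sketched as one root-shell sequence. POSTMORTEM_DIR is an assumption: wherever generate_postmortem.sh was downloaded on the appliance.

```shell
# Sketch of the appliance-side collection (run after `sudo -i`).
# POSTMORTEM_DIR is an assumption; substitute the actual download directory.
POSTMORTEM_DIR="/root"
CMDS="cd ${POSTMORTEM_DIR}
./generate_postmortem.sh 0 --ova --pull-appliance-logs
apic status > apic_status.out
apic version > apic_version.out"
printf '%s\n' "${CMDS}"   # printed as a dry run; execute each line as root on the appliance
```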
 

Back to top



 
Installation or Upgrade Issue (all subsystems): Kubernetes deployment
  1. Download and install the latest v2018-postmortem-tool
    NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
  2. Run the postmortem tool and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
      NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
    2. Change to the apicup project directory
    3. Execute the following command:
      ./generate_postmortem.sh 0 --diagnostic-all
  3. Upload the following to the case:
    1. Any error messages received from running the apicup subsys install command
    2. apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
    3. Archive file of the apicup project directory
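  The apicup check in the NOTE above can be sketched as a small guard. PROJECT_DIR is an assumed path; substitute the apicup project directory used for installation.

```shell
# Sketch: run the postmortem tool from the apicup project directory when one
# exists, otherwise append --no-apicup. PROJECT_DIR is an assumption.
PROJECT_DIR="${HOME}/apicup-project"
BASE="./generate_postmortem.sh 0 --diagnostic-all"
if [ -d "${PROJECT_DIR}" ]; then
  CMD="${BASE}"
else
  CMD="${BASE} --no-apicup"
fi
echo "${CMD}"   # printed as a dry run; execute from the node used for installation
```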
 

Back to top



 
Management subsystem
  1. Download and install the latest v2018-postmortem-tool
    NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
  2. Reproduce the problem
  3. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OVA deployment:
      1. Via SSH, connect to the target appliance
      2. Switch to the root user
        sudo -i
      3. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
      4. Execute the following command:
        ./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-manager
    • IBM Cloud Pak for Integration deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-manager --cp4i
    • Native Kubernetes deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-manager
    • OpenShift, IBM Cloud Private, or Azure deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh 0 --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
    2. Steps to reproduce the problem
    3. Time that the error occurred or start/stop time of reproducing the error
    4. Screen capture of error (if applicable)
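  For the OpenShift, IBM Cloud Private, or Azure case, the namespace flag can be composed as in this sketch (the namespace names are placeholders; replace them with yours):

```shell
# Sketch: build the --extra-namespaces flag for a multi-namespace deployment.
# The namespace names are placeholders.
NAMESPACES="dev1,dev2,dev3"
CMD="./generate_postmortem.sh 0 --diagnostic-manager --extra-namespaces=${NAMESPACES}"
echo "${CMD}"   # printed as a dry run; execute from the apicup project directory
```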
 

Back to top



 
Developer Portal subsystem
  1. Download and install the latest v2018-postmortem-tool
    NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
  2. Reproduce the problem
  3. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OVA deployment:
      1. Via SSH, connect to the target appliance
      2. Switch to the root user
        sudo -i
      3. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
      4. Execute the following command:
        ./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-portal
    • IBM Cloud Pak for Integration deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-portal --cp4i
    • Native Kubernetes deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-portal
    • OpenShift, IBM Cloud Private, or Azure deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh 0 --diagnostic-portal --extra-namespaces=APIC_NAMESPACE
  4. Submit the following to the case:
    1. apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
    2. Steps to reproduce the problem
    3. Time that the error occurred or start/stop time of reproducing the error
    4. Screenshot of error (if applicable)
 

Back to top



 
Analytics subsystem
  1. Download and install the latest v2018-postmortem-tool
    NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
  2. Reproduce the problem
  3. Run the postmortem tool and note the location where the postmortem output file is saved:
    • OVA deployment:
      1. Via SSH, connect to the target appliance
      2. Switch to the root user
        sudo -i
      3. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
      4. Execute the following command:
        ./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-analytics
    • IBM Cloud Pak for Integration deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-analytics --cp4i
    • Native Kubernetes deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-analytics
    • OpenShift, IBM Cloud Private, or Azure deployment:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh 0 --diagnostic-analytics --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
    2. Steps to reproduce the problem
    3. Time that the error occurred or start/stop time of reproducing the error
    4. Screenshot of error (if applicable)
 

Back to top



 
Gateway subsystem: Native Kubernetes deployment
  1. Download and install:
    • the latest v2018-postmortem-tool
      NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
    • the latest apicops command-line interface
      • ensure that you follow the requirements section so that the tool will work correctly in your environment
  2. Reproduce the problem
  3. Run the postmortem and apicops tools and note the location where the postmortem output file is saved:
    • If the management and gateway subsystems are installed in the same Kubernetes cluster:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-all
      4. Execute the following command:
        NOTE: If the command returns an error, review the steps documented in the requirements section
        ./apicops-linux debug:info
    • If the management and gateway subsystems are installed in different Kubernetes clusters:
      • On the node that the management subsystem is installed:
        1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
          NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
        2. Change to the apicup project directory
        3. Execute the following command:
          ./generate_postmortem.sh 0 --diagnostic-manager
        4. Execute the following command:
          NOTE: If the command returns an error, review the steps documented in the requirements section
          ./apicops-linux debug:info
      • On the node that the gateway subsystem is installed:
        1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
          NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
        2. Change to the apicup project directory
        3. Execute the following command:
          ./generate_postmortem.sh 0 --diagnostic-gateway
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/.zip files generated from the postmortem commands
    2. output from apicops-linux command in step 3
    3. Time that the error occurred or start/stop time of reproducing the error
    4. OPTIONAL (in addition to items 4.1 - 4.3): for a specific API that is failing, also upload
      • For the API Gateway: 
        • Files under temporary://
        • Related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • Related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
 

Back to top



 
Gateway subsystem: IBM Cloud Pak for Integration deployment
  1. Download and install:
    • the latest v2018-postmortem-tool
      NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
    • the latest apicops command-line interface
      • ensure that you follow the requirements section so that the tool will work correctly in your environment
  2. Reproduce the problem
  3. Run the postmortem and apicops tools and note the location where the postmortem output file is saved:
    • If the management and gateway subsystems are installed on the same Kubernetes cluster:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command:
        ./generate_postmortem.sh 0 --diagnostic-all --cp4i
      4. Execute the following command:
        NOTE: If the command returns an error, review the steps documented in the requirements section
        ./apicops-linux debug:info
    • If the management and gateway subsystems are installed on different Kubernetes clusters:
      • On the node that the management subsystem is installed:
        1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
          NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
        2. Change to the apicup project directory
        3. Execute the following command:
          ./generate_postmortem.sh 0 --diagnostic-manager --cp4i
        4. Execute the following command:
          NOTE: If the command returns an error, review the steps documented in the requirements section
          ./apicops-linux debug:info
      • On the node that the gateway subsystem is installed:
        1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
          NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
        2. Change to the apicup project directory
        3. Execute the following command:
          ./generate_postmortem.sh 0 --diagnostic-gateway --cp4i
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/.zip files generated from the postmortem commands
    2. output from apicops-linux command in step 3
    3. Time that the error occurred or start/stop time of reproducing the error
    4. OPTIONAL (in addition to items 4.1 - 4.3): for a specific API that is failing, also upload
      • For the API Gateway: 
        • Files under temporary://
        • Related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • Related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
 

Back to top



 
Gateway subsystem: OpenShift, IBM Cloud Private, or Azure deployment
  1. Download and install:
    • the latest v2018-postmortem-tool
      NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
    • the latest apicops command-line interface
      • ensure that you follow the requirements section so that the tool will work correctly in your environment
  2. Reproduce the problem
  3. Run the postmortem and apicops tools and note the location where the postmortem output file is saved:
    • If the management and gateway subsystems are installed on the same Kubernetes cluster:
      1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
        NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
      2. Change to the apicup project directory
      3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
        NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
        ./generate_postmortem.sh 0 --diagnostic-all --extra-namespaces=APIC_NAMESPACE
      4. Execute the following command:
        NOTE: If the command returns an error, review the steps documented in the requirements section
        ./apicops-linux debug:info
    • If the management and gateway subsystems are installed on different Kubernetes clusters:
      • On the node that the management subsystem is installed:
        1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
          NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
        2. Change to the apicup project directory
        3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
          NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
          ./generate_postmortem.sh 0 --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
        4. Execute the following command:
          NOTE: If the command returns an error, review the steps documented in the requirements section
          ./apicops-linux debug:info
      • On the node that the gateway subsystem is installed:
        1. Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem 
          NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
        2. Change to the apicup project directory
        3. Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
          NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
          ./generate_postmortem.sh 0 --diagnostic-gateway --extra-namespaces=APIC_NAMESPACE
  4. Upload the following to the case:
    1. apiconnect-logs-*.tgz/.zip files generated from the postmortem commands
    2. output from apicops-linux command in step 3
    3. Time that the error occurred or start/stop time of reproducing the error
    4. OPTIONAL (in addition to items 4.1 - 4.3): for a specific API that is failing, also upload
      • For the API Gateway: 
        • Files under temporary://
        • Related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • Related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
 

Back to top



 
Gateway subsystem: VMware deployment or physical appliance
  1. Via SSH, connect to the DataPower server
  2. Collect API Connect gateway service log data by configuring the following log target in the API Connect application domain using the CLI.
    • Repeat this step for each gateway server in the cluster
      sw <apiconnect domain>
      configure terminal
      logging target gwd-log
      type file
      format text
      timestamp zulu
      size 50000
      local-file logtemp:///gwd-log.log
      event apic-gw-service debug
      exit
      apic-gw-service;admin-state disabled;exit
      apic-gw-service;admin-state enabled;exit
      write mem
      exit
  3. OPTIONAL: Enable gateway-peering debug logs via the DataPower CLI.
    • Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
      NOTE: To determine the configured peering objects, issue the following command within the API Connect application domain: show gateway-peering-status
      sw <apiconnect domain>
      diagnostics
      gateway-peering-debug GW_PEERING_OBJECT_NAME
      exit
  4. On the management subsystem, download and install:
    • the latest v2018-postmortem-tool.
      NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
    • the latest apicops command-line interface
      • ensure that you follow the requirements section so that the tool will work correctly in your environment
  5. Reproduce the problem.
  6. On the management subsystem, run the postmortem tool and note the location where the postmortem output file is saved:
    1. Via SSH, connect to the target appliance
    2. Switch to the root user
      sudo -i
    3. Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
    4. Execute the following command as root user:
      ./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-manager
  7. On the management subsystem, run the apicops tool:
    1. Via SSH, connect to the target appliance
    2. Switch to the root user
      sudo -i
    3. Change to the directory where the apicops binary file was downloaded to the appliance
    4. Execute the following command as root user:
      NOTE: If the command returns an error, review the steps documented in the requirements section
      ./apicops-linux debug:info
  8. Generate an error-report via the DataPower CLI.
    • Repeat this step for each gateway server in the cluster
      sw default
      conf; save error-report
  9. OPTIONAL (only perform this step if you performed step 3): Dump the gateway-peering debug logs via the DataPower CLI and then disable gateway-peering debug.
    • Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
      NOTE: To determine the configured peering objects, issue the following command within the API Connect application domain: show gateway-peering-status
      sw <apiconnect domain>
      diagnostics
      gateway-peering-dump GW_PEERING_OBJECT_NAME
      no gateway-peering-debug GW_PEERING_OBJECT_NAME
      exit
  10. Submit the following to the case:
    1. apiconnect-logs-*.tgz/.zip file (which was generated from the postmortem command on the management subsystem)
    2. Output from apicops-linux command in step 7
    3. For each gateway server in the cluster:
      • The gateway service log written to logtemp://gwd-log.log in the API Connect application domain
      • <error report filename>.txt.gz (error report) 
      • OPTIONAL: gateway-peering logs (gatewaypeering.log and gatewaypeeringmonitor.log) in temporary:///<name of gateway peering object in API Connect application domain>
      • Output of the following command issued from the DataPower command-line interface: show gateway-peering-status
    4. Time that the error occurred or start/stop time of reproducing the error
    5. OPTIONAL (in addition to items 10.1 - 10.3): for a specific API that is failing, also upload
      • For the API Gateway: 
        • Files under temporary://
        • Related yaml files
        • DataPower configuration for application domain
        • Probe of the failing transaction, see Configuring the API probe
      • For a v5-compatible gateway: 
        • Related yaml files
        • Probe of the failing transaction 
        • Export of the document cache for webapi and webapi-internal
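  Before uploading, the per-gateway files in item 10.3 can be bundled into one archive per gateway. A sketch with assumed file names (adjust to the files you actually retrieved from each appliance):

```shell
# Sketch: bundle the per-gateway artifacts for upload. The gateway name and
# file names are assumptions based on the list above.
GATEWAY="gw1"
FILES="gwd-log.log error-report.txt.gz gatewaypeering.log gatewaypeeringmonitor.log"
ARCHIVE="gateway-${GATEWAY}-mustgather.tgz"
echo "tar -czf ${ARCHIVE} ${FILES}"   # printed as a dry run; run where the files were copied
```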
 
  How to submit diagnostic data to IBM Support 

  After you have collected the preceding information and the case is opened, see
  Exchanging information with IBM Technical Support.

  For more details, see Submit diagnostic data to IBM (ECuRep) and Enhanced Customer Data Repository (ECuRep) secure upload.

Back to top

Document Location

Worldwide


Document Information

Modified date:
05 July 2023

UID

ibm10880893