Troubleshooting
Problem
This document provides instructions for gathering diagnostic data for API Connect deployments on both Kubernetes and VMware.
Diagnosing The Problem
Each section in this document contains instructions for collecting MustGather data for the associated IBM API Connect subsystem. IBM Support requires this data to effectively diagnose and resolve issues.
- Migration Issue: v5 to v2018
- Installation or Upgrade Issue (all subsystems): VMware deployment
- Installation or Upgrade Issue (all subsystems): Kubernetes deployment
- Management subsystem
- Developer Portal subsystem
- Analytics subsystem
- Gateway subsystem: Native Kubernetes deployment
- Gateway subsystem: IBM Cloud Pak for Integration deployment
- Gateway subsystem: OpenShift, IBM Cloud Private, or Azure deployment
- Gateway subsystem: VMware deployment or physical appliance
Migration Issue: v5 to v2018
- Upload the following to the case:
- the v5 dbextract file, which was produced with the following v5 command:
config dbextract sftp <host_name> user <user_name> file <path/name>
- .zip file of the migration utility logs directory
- Command which generated the specific errors that were observed
- Screen capture of specific errors that were observed
- OPTIONAL: if the issue occurred with the port-to-APIGW or push command, also upload the .zip file of the cloud folder that was being used in the command
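If helpful, the migration utility logs directory can be packaged from a shell before upload. This is only a sketch: the /tmp/apim-migration path is hypothetical, so substitute your actual migration utility directory.

```shell
# Hypothetical layout; replace /tmp/apim-migration with your real
# migration utility directory.
mkdir -p /tmp/apim-migration/logs
echo "sample log line" > /tmp/apim-migration/logs/migration.log

# Package the logs directory as a single compressed archive for the case.
tar -czf /tmp/migration-logs.tgz -C /tmp/apim-migration logs

# List the archive contents to confirm it was written correctly.
tar -tzf /tmp/migration-logs.tgz
```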
Installation or Upgrade Issue (all subsystems): VMware deployment
- Download and install the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- Run the postmortem tool and note the location where the postmortem output file is saved:
- Via SSH, connect to the target appliance encountering the issue
- Switch to the root user
sudo -i
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Execute the following command:
./generate_postmortem.sh 0 --ova --pull-appliance-logs
- As root user, gather the status of apic:
apic status > apic_status.out
- As root user, gather the version of apic:
apic version > apic_version.out
- Upload the following to the case:
- Any error messages received from running the apicup subsys install command
- apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
- apic_status.out
- apic_version.out
- Archive file of the apicup project directory
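As a sketch of the last item, the apicup project directory can be archived with tar so its relative layout is preserved. The /tmp/apicup-project path and the apiconnect-up.yml file here are stand-ins for your real project directory and its contents.

```shell
# Stand-in project directory; point PROJECT at your real apicup project.
PROJECT=/tmp/apicup-project
mkdir -p "$PROJECT"
echo "dummy config" > "$PROJECT/apiconnect-up.yml"

# Archive the directory from its parent so paths inside the archive
# stay relative to the project folder.
tar -czf /tmp/apicup-project.tgz -C "$(dirname "$PROJECT")" "$(basename "$PROJECT")"

# Confirm the expected files are inside.
tar -tzf /tmp/apicup-project.tgz
```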
Installation or Upgrade Issue (all subsystems): Kubernetes deployment
- Download and install the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- Run the postmortem tool and note the location where the postmortem output file is saved:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-all
- Upload the following to the case:
- Any error messages received from running the apicup subsys install command
- apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
- Archive file of the apicup project directory
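Before attaching the postmortem archive, it can be worth confirming that it lists cleanly, since a truncated download or interrupted run produces an unreadable tarball. A sketch with a stand-in archive, because the real file name varies per run:

```shell
# Build a stand-in archive; in practice substitute the real
# apiconnect-logs-*.tgz produced by generate_postmortem.sh.
mkdir -p /tmp/pm-demo/logs
echo "kubectl describe output" > /tmp/pm-demo/logs/describe.out
tar -czf /tmp/apiconnect-logs-demo.tgz -C /tmp pm-demo

# A corrupt or truncated archive makes this listing fail with a
# non-zero exit status.
tar -tzf /tmp/apiconnect-logs-demo.tgz | head
```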
Management subsystem
- Download and install the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- Reproduce the problem
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OVA deployment:
- Via SSH, connect to the target appliance
- Switch to the root user
sudo -i
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Execute the following command:
./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-manager
- IBM Cloud Pak for Integration deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-manager --cp4i
- Native Kubernetes deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-manager
- OpenShift, IBM Cloud Private, or Azure deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh 0 --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
- Steps to reproduce the problem
- Time that the error occurred or start/stop time of reproducing the error
- Screen capture of error (if applicable)
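Since IBM Support asks for the error window, one low-effort habit is to capture UTC timestamps immediately before and after reproducing the problem. A minimal sketch; the file names are arbitrary:

```shell
# Mark the start of the reproduction window in UTC (ISO 8601).
date -u +"%Y-%m-%dT%H:%M:%SZ" > /tmp/repro-start.txt

# ... reproduce the problem here ...

# Mark the end of the window, then include both files in the case.
date -u +"%Y-%m-%dT%H:%M:%SZ" > /tmp/repro-end.txt
cat /tmp/repro-start.txt /tmp/repro-end.txt
```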
Developer Portal subsystem
- Download and install the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- Reproduce the problem
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OVA deployment:
- Via SSH, connect to the target appliance
- Switch to the root user
sudo -i
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Execute the following command:
./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-portal
- IBM Cloud Pak for Integration deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-portal --cp4i
- Native Kubernetes deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-portal
- OpenShift, IBM Cloud Private, or Azure deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh 0 --diagnostic-portal --extra-namespaces=APIC_NAMESPACE
- Submit the following to the case:
- apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
- Steps to reproduce the problem
- Time that the error occurred or start/stop time of reproducing the error
- Screenshot of error (if applicable)
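For the multi-namespace case mentioned in the NOTE above, the comma-separated value for --extra-namespaces can be built in the shell. The dev1/dev2/dev3 names are the hypothetical examples from the NOTE, not real namespaces:

```shell
# Hypothetical namespace names; substitute your actual API Connect namespaces.
NAMESPACES="dev1 dev2 dev3"

# Join with commas to form the value the postmortem tool expects.
EXTRA=$(printf '%s' "$NAMESPACES" | tr ' ' ',')
echo "--extra-namespaces=$EXTRA"
# prints: --extra-namespaces=dev1,dev2,dev3
```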
Analytics subsystem
- Download and install the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- Reproduce the problem
- Run the postmortem tool and note the location where the postmortem output file is saved:
- OVA deployment:
- Via SSH, connect to the target appliance
- Switch to the root user
sudo -i
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Execute the following command:
./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-analytics
- IBM Cloud Pak for Integration deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-analytics --cp4i
- Native Kubernetes deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-analytics
- OpenShift, IBM Cloud Private, or Azure deployment:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh 0 --diagnostic-analytics --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/.zip file which was generated from the postmortem command
- Steps to reproduce the problem
- Time that the error occurred or start/stop time of reproducing the error
- Screenshot of error (if applicable)
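Large diagnostic uploads occasionally arrive corrupted, so recording a checksum alongside the archive lets IBM Support verify integrity on their side. A sketch using a stand-in file name in place of the real apiconnect-logs-*.tgz:

```shell
# Stand-in for the generated apiconnect-logs-*.tgz archive.
echo "archive payload" > /tmp/apiconnect-logs-demo2.tgz

# Record a SHA-256 checksum next to the archive before uploading both.
sha256sum /tmp/apiconnect-logs-demo2.tgz > /tmp/apiconnect-logs-demo2.tgz.sha256

# Anyone holding both files can re-verify the archive:
sha256sum -c /tmp/apiconnect-logs-demo2.tgz.sha256
```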
Gateway subsystem: Native Kubernetes deployment
- Download and install:
- the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- the latest apicops command-line interface
- ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem
- Run the postmortem and apicops tools and note the location where the postmortem output file is saved:
- If the management and gateway subsystems are installed in the same Kubernetes cluster:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-all
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- If the management and gateway subsystems are installed in different Kubernetes clusters:
- On the node that the management subsystem is installed:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-manager
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- On the node that the gateway subsystem is installed:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-gateway
- Upload the following to the case:
- apiconnect-logs-*.tgz/.zip files generated from the postmortem commands
- output from apicops-linux command in step 3
- Time that the error occurred or start/stop time of reproducing the error
- OPTIONAL (in addition to steps 4.1 - 4.3): for a specific API that is failing, also provide:
- For the API Gateway:
- Files under temporary://
- Related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- Related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
Gateway subsystem: IBM Cloud Pak for Integration deployment
- Download and install:
- the latest v2018-postmortem-tool
- the latest apicops command-line interface
- ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem
- Run the postmortem and apicops tools and note the location where the postmortem output file is saved:
- If the management and gateway subsystems are installed on the same Kubernetes cluster:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-all --cp4i
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- If the management and gateway subsystems are installed on different Kubernetes clusters:
- On the node that the management subsystem is installed:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-manager --cp4i
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- On the node that the gateway subsystem is installed:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command:
./generate_postmortem.sh 0 --diagnostic-gateway --cp4i
- Upload the following to the case:
- apiconnect-logs-*.tgz/.zip files generated from the postmortem commands
- output from apicops-linux command in step 3
- Time that the error occurred or start/stop time of reproducing the error
- OPTIONAL (in addition to steps 4.1 - 4.3): for a specific API that is failing, also provide:
- For the API Gateway:
- files under temporary://
- related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
Gateway subsystem: OpenShift, IBM Cloud Private, or Azure deployment
- Download and install:
- the latest v2018-postmortem-tool
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- the latest apicops command-line interface
- ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem
- Run the postmortem and apicops tools and note the location where the postmortem output file is saved:
- If the management and gateway subsystems are installed on the same Kubernetes cluster:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh 0 --diagnostic-all --extra-namespaces=APIC_NAMESPACE
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- If the management and gateway subsystems are installed on different Kubernetes clusters:
- On the node that the management subsystem is installed:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh 0 --diagnostic-manager --extra-namespaces=APIC_NAMESPACE
- Execute the following command:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- On the node that the gateway subsystem is installed:
- Via SSH, connect to the target node that contains the apicup project directory used to install the subsystem
NOTE: If apicup was not used for installation, skip to step 3 and append --no-apicup
- Change to the apicup project directory
- Execute the following command and replace APIC_NAMESPACE with the value for the API Connect namespace:
NOTE: If there are multiple API Connect namespaces, separate each namespace with a comma. For example: --extra-namespaces=dev1,dev2,dev3
./generate_postmortem.sh 0 --diagnostic-gateway --extra-namespaces=APIC_NAMESPACE
- Upload the following to the case:
- apiconnect-logs-*.tgz/.zip files generated from the postmortem commands
- output from apicops-linux command in step 3
- Time that the error occurred or start/stop time of reproducing the error
- OPTIONAL (in addition to steps 4.1 - 4.3): for a specific API that is failing, also provide:
- For the API Gateway:
- Files under temporary://
- Related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- Related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
Gateway subsystem: VMware deployment or physical appliance
- Via SSH, connect to the DataPower server
- Collect API Connect gateway service log data by configuring the following log target in the API Connect application domain using the CLI.
- Repeat this step for each gateway server in the cluster
sw <apiconnect domain>
configure terminal
logging target gwd-log
type file
format text
timestamp zulu
size 50000
local-file logtemp:///gwd-log.log
event apic-gw-service debug
exit
apic-gw-service;admin-state disabled;exit
apic-gw-service;admin-state enabled;exit
write mem
exit
- OPTIONAL: Enable gateway-peering debug logs via the DataPower CLI.
- Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
NOTE: To determine the configured peering objects, issue the following command within the API Connect application domain: show gateway-peering-status
sw <apiconnect domain>
diagnostics
gateway-peering-debug GW_PEERING_OBJECT_NAME
exit
- On the management subsystem, download and install:
- the latest v2018-postmortem-tool.
NOTE: v2018.4.1.15+ uses Helm 3, and older versions of the postmortem tool do not work correctly on a v2018.4.1.15+ installation. Download and install the latest version.
- the latest apicops command-line interface
- ensure that you follow the requirements section so that the tool will work correctly in your environment
- Reproduce the problem.
- On the management subsystem, run the postmortem tool and note the location where the postmortem output file is saved:
- Via SSH, connect to the target appliance
- Switch to the root user
sudo -i
- Change to the directory where the generate_postmortem.sh script was downloaded to the appliance
- Execute the following command as root user:
./generate_postmortem.sh 0 --ova --pull-appliance-logs --diagnostic-manager
- On the management subsystem, run the apicops tool:
- Via SSH, connect to the target appliance
- Switch to the root user
sudo -i
- Change to the directory where the apicops binary file was downloaded to the appliance
- Execute the following command as root user:
NOTE: If the command returns an error, review the steps documented in the requirements section.
./apicops-linux debug:info
- Generate an error-report via the DataPower CLI.
- Repeat this step for each gateway server in the cluster
sw default
conf; save error-report
- OPTIONAL (only perform this step if you performed step 3): Dump the gateway-peering debug logs via the DataPower CLI and then disable gateway-peering debug.
- Repeat this step for each gateway server in the cluster and replace GW_PEERING_OBJECT_NAME with the correct name of the peering object:
NOTE: To determine the configured peering objects, issue the following command within the API Connect application domain: show gateway-peering-status
sw <apiconnect domain>
diagnostics
gateway-peering-dump GW_PEERING_OBJECT_NAME
no gateway-peering-debug GW_PEERING_OBJECT_NAME
exit
- Submit the following to the case:
- apiconnect-logs-*.tgz/.zip file (which was generated from the postmortem command on the management subsystem)
- Output from apicops-linux command in step 7
- For each gateway server in the cluster:
- The gateway service log written to logtemp://gwd-log.log in the API Connect application domain
- <error report filename>.txt.gz (error report)
- OPTIONAL: gateway-peering logs (gatewaypeering.log and gatewaypeeringmonitor.log) in temporary:///<name of gateway peering object in API Connect application domain>
- Output of the following command issued from DataPower command-line interface: `show gateway-peering-status`
- Time that the error occurred or start/stop time of reproducing the error
- OPTIONAL (in addition to steps 10.1 - 10.3): for a specific API that is failing, also provide:
- For the API Gateway:
- Files under temporary://
- Related yaml files
- DataPower configuration for application domain
- Probe of the failing transaction, see Configuring the API probe
- For a v5-compatible gateway:
- Related yaml files
- Probe of the failing transaction
- Export of the document cache for webapi and webapi-internal
How to submit diagnostic data to IBM Support
After you have collected the preceding information and the case is opened, see Submit diagnostic data to IBM (ECuRep) and Enhanced Customer Data Repository (ECuRep) secure upload.
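If an archive is too large to upload in one piece, it can be split into fixed-size parts first; check ECuRep's current guidance for size limits and any naming requirements. A sketch using a stand-in file and a 16 KB part size chosen purely for illustration:

```shell
# Create a stand-in 64 KB archive; in practice this is your diagnostics file.
dd if=/dev/zero of=/tmp/big-archive.tgz bs=1024 count=64 2>/dev/null

# Split into 16 KB parts named big-archive.tgz.part-aa, -ab, -ac, -ad.
split -b 16k -a 2 /tmp/big-archive.tgz /tmp/big-archive.tgz.part-
ls /tmp/big-archive.tgz.part-*

# The parts reassemble losslessly:
#   cat big-archive.tgz.part-* > big-archive.tgz
```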
Document Location
Worldwide
Document Information
Modified date:
05 July 2023
UID
ibm10880893