IBM Support

MustGather for IBM Sterling Intelligent Promising Containers

Troubleshooting


Problem

MustGather information is crucial for problem determination and can save time while resolving cases related to IBM Sterling Intelligent Promising (SIP) v10 Containers. Providing all requested information upfront allows IBM Sterling Support to better understand and address your issue effectively.

After you capture the diagnostics, this document also explains how to share the data with IBM Support.

Diagnosing The Problem

Gathering information to open a support ticket

A valid IBM customer number, contact name, email address, and phone number are required to validate your entitlement and contact information.
Refer to the section "Accessing Software Support" in the IBM Software Support Handbook for the full list of information necessary to open a support ticket.

To determine the correct severity of your issue for your business, refer to the IBM Software Support Handbook.

Before You Proceed
  • Before gathering problem-specific information, review the Best Practices document and ensure that all guidelines are followed. If any practices are not being followed, rectify these and check whether the issue is resolved. If problems persist, proceed to provide the following information.
A. Initial Checklist
  • Provide answers to all of the following before you gather problem-specific information requested in part B.
  • Describe the Business Impact (current/future) due to this problem
  • Indicate any project deadlines impacted
  • What is the current impact on users? How many users are affected? 
  • How frequently is the issue seen - once/multiple times a day/week, intermittently/consistently?
  • Specify whether the issue is in Development, QA, Pre-Prod, Production or all environments.
  • Is this a new flow being tested or was it working before?
  • When did the issue start? Were there any recent changes to the system such as a new deployment, functionality roll-out etc.?
  • Provide detailed replication steps for Support to test the issue along with any relevant screenshots.
B. Identify your Specific Issue with SIP Containers

Note on Command Applicability in This MustGather

  • Kubernetes (kubectl): Typically used in vanilla Kubernetes environments, kubectl is the standard tool for interacting with Kubernetes clusters.
  • OpenShift (oc): Specially tailored for OpenShift, oc extends kubectl with additional features that cater specifically to the needs of OpenShift users.

In this MustGather document, both kubectl and oc commands are given as options for the same tasks because they are functionally equivalent for these operations. Choose the command appropriate to your platform to manage and troubleshoot your Kubernetes or OpenShift environment.
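
As an illustrative convenience (this pattern is a sketch, not part of the product documentation), you can detect once which CLI is available and reuse it in the commands that follow:

```shell
# Prefer oc when it is installed; otherwise fall back to kubectl.
if command -v oc >/dev/null 2>&1; then
  CLI=oc
else
  CLI=kubectl
fi
echo "Using $CLI for cluster commands"
```

Any command shown later as oc get ... can then be run as $CLI get ... on either platform.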


Note: If you have cluster permissions, run the diagnostic script mentioned in Section C and attach the diagnostics. If not, proceed and provide the diagnostics for the issue you are facing.

  • Must Gather Information for Operator and Catalog Source Issues

  • Version and Pod Status

    Objective: To check the operational status and versions of Catalog Sources and associated Pods.

    Command:

    oc get catalogsource,pods -n <namespace>

    Note: Replace <namespace> with the specific namespace where the catalog source is deployed. For OpenShift, this typically is openshift-marketplace. In vanilla Kubernetes, the namespace might vary based on your setup.

  • Catalog Source Details

    Objective: To collect detailed YAML configuration data of the Catalog Sources for in-depth troubleshooting.

    Commands:

    1. General Catalog Source Information:

      oc get catalogsource -n <namespace> -o yaml > operator_troubleshooting_csinfo.yaml
    2. Specific Catalog Source Configuration:

      oc get catalogsource <catalogsource-name> -n <namespace> -o yaml > specific_catalogsource_details.yaml

      This command allows for retrieving YAML configurations of specific catalog sources, such as ibm-sip-catalog or ibm-oms-gateway-catalog, providing a granular view necessary for deeper troubleshooting.

    Notes on Namespace Specification: Ensure <namespace> correctly reflects where the catalog source is deployed. The default in OpenShift is often openshift-marketplace. It's important to adjust this based on your Kubernetes or OpenShift configuration settings.

  • Operator Group Details

    Objective: Retrieve details about the Operator Group in the specified namespace to ensure correct configuration and alignment with the deployed operators.

    Command:

    oc get OperatorGroup -n <namespace>

    Note: Replace <namespace> with the specific namespace where the subscription or operator is deployed. This is crucial as the Operator Group details need to match the namespace of the operational environment to provide accurate diagnostics.

  • Subscription Details

    Retrieve Subscription Names

    Objective: Gather the names of all subscriptions in the specified namespace to check their current status and configuration.

    Setup: Set the namespace variable to ensure all commands are consistently applied to the correct namespace:

    ns=<namespace> # Replace <namespace> with the actual namespace

    Command:

    subs=$(oc get sub -n $ns --no-headers -o custom-columns=:metadata.name)

    This command saves the names of all subscriptions in the specified namespace into a variable, making it easier to use in subsequent commands.

    Subscription Details

    Objective: After obtaining the names of the subscriptions, fetch detailed information about each subscription.

    Command:

    oc get sub $subs -n $ns -o yaml > operator_troubleshooting_subinfo.yaml

    Note: Ensure that the $ns variable is set to the namespace where the subscription or operator is deployed. This approach not only streamlines the command execution but also minimizes the risk of applying commands to the wrong namespace, ensuring accurate and relevant data collection.

  • Logs Collection

    IBM-OMS-gateway-operator Logs

    Objective: List and save logs for IBM-OMS-gateway-operator pods.

    Setup: Set the namespace variable to ensure all commands are consistently applied to the correct namespace:

    ns=<namespace> # Replace <namespace> with the actual namespace

    Commands:

    1. Get the IBM-OMS-gateway-operator Pods:

      pods_OMS_gateway=$(oc get pods -n $ns -l app.kubernetes.io/name=ibm-oms-gateway-operator -o custom-columns=NAME:.metadata.name --no-headers)
    2. Export logs if any pods are found:

      if [ -z "$pods_OMS_gateway" ]; then echo "No IBM-OMS-gateway-operator pods found in namespace $ns."; else for pod in $pods_OMS_gateway; do oc logs $pod -n $ns > ${pod}_logs.txt; echo "Logs for pod $pod have been saved to ${pod}_logs.txt"; done; fi

    IBM-sip-operator Logs

    Objective: List and save logs for IBM-sip-operator pods.

    Commands:

    1. Get the IBM-sip-operator Pods:

      pods_sip_operator=$(oc get pods -n $ns -l app.kubernetes.io/name=ibm-sip-operator -o custom-columns=NAME:.metadata.name --no-headers)
    2. Export logs if any pods are found:

      if [ -z "$pods_sip_operator" ]; then echo "No IBM-sip-operator pods found in namespace $ns."; else for pod in $pods_sip_operator; do oc logs $pod -n $ns > ${pod}_logs.txt; echo "Logs for pod $pod have been saved to ${pod}_logs.txt"; done; fi

    Notes:

    Ensure that the $ns variable is set to the namespace where the IBM-OMS-gateway-operator and IBM-sip-operator are deployed. This method helps maintain consistency and accuracy while collecting logs.

  • Deployment and Service Group Status

    SIP Environment Deployment Status

    Objective: Gather the status of the SIP Environment deployment.

    Command:

    oc describe sipenvironments.apps.sip.ibm.com -n <namespace> > operator_troubleshooting_deploymentstatusinfo.txt

    Note: Replace <namespace> with the actual namespace where the SIP environment is deployed. This command provides a detailed description of the SIP environment deployment status, which is saved to a file for troubleshooting.

    Service Group Status

    Objective: Retrieve the status of various service groups in YAML format.

    Setup: Set the namespace variable to ensure all commands are consistently applied to the correct environment:

    ns=<namespace> # Replace <namespace> with the actual namespace
    1. List All Instances:

      IV Service Group:

      kubectl get ivservicegroups.apps.sip.ibm.com -n $ns

      Promising Service Group:

      kubectl get promisingservicegroups.apps.sip.ibm.com -n $ns

      Utility Service Group:

      kubectl get utilityservicegroups.apps.sip.ibm.com -n $ns
    2. Set the Instance Variable:

      instance=<instance> # Replace <instance> with the actual instance name (e.g., dev, prod, etc.)
    3. Retrieve Specific Instance Details:

      IV Service Group:

      oc get IVServiceGroup $instance -n $ns -o yaml > ${instance}_IVServiceGroup.yaml

      Promising Service Group:

      oc get PromisingServiceGroup $instance -n $ns -o yaml > ${instance}_PromisingServiceGroup.yaml

      Utility Service Group:

      oc get UtilityServiceGroup $instance -n $ns -o yaml > ${instance}_UtilityServiceGroup.yaml

    Notes:

    Namespace and Instance Reference: Ensure that the $ns variable is set to the namespace where the SIP environment and service groups are deployed, and the $instance variable is set to the specific instance name.

  • Stateful Sets and CRDs

    Check the Status of Stateful Sets and Custom Resource Definitions (CRDs)

    Objective: Check the status of Stateful Sets and Custom Resource Definitions (CRDs) associated with SIP.

    Setup: Set the namespace variable to ensure all commands are consistently applied to the correct environment:

    ns=<namespace> # Replace <namespace> with the actual namespace

    Commands:

    oc get statefulset,crd -n $ns | grep sip
  • If Catalog Source Pods Are Not Being Created During SIP Deployment Due to PodSecurityStandards (PSS) Enforcement

    Adjusting Pod Security Standards

    Objective: Change the namespace to a less restrictive PSS profile to allow the creation of Catalog Source pods.

    Setup: Set the namespace variable to ensure all commands are consistently applied to the correct environment:

    ns=<namespace> # Replace <namespace> with the actual namespace where the catalog source is deployed

    Commands:

    1. Adjust PSS Profile:

      kubectl label --overwrite ns $ns pod-security.kubernetes.io/enforce=baseline
    2. Confirm Execution:

      Objective: Ensure that the command was successfully executed to comply with the necessary security standards.

      Command:

      kubectl get ns $ns --show-labels | grep pod-security.kubernetes.io/enforce=baseline

      This command verifies that the namespace label has been successfully updated.

    Notes:

    Namespace Reference: Ensure that the $ns variable is set to the namespace where the Catalog Source is deployed. Adjusting the PSS profile helps in overcoming the restrictions that prevent pod creation.

    The commands ensure that the namespace is set to a less restrictive profile, facilitating the successful deployment of Catalog Source pods.

  • Persistent Volume (PV) Access Permissions

    Ensure Correct Access Permissions for Persistent Volumes

    Objective: Verify and adjust the access permissions for Persistent Volumes used by Operators.

    Setup: Set the namespace variable:

    ns=<namespace> # Replace <namespace> with the actual namespace

    Requirements:

    • Access Mode: ReadWriteMany (RWX)
    • Storage: Minimum of 10 GB
    • Accessibility: Accessible by all containers across the cluster
    • Write Access: Owner group must have write access
    • Security Context: Set the fsGroup parameter in the SIPEnvironment custom resource

    Commands:

    1. List Persistent Volumes:

      kubectl get pv -n $ns > pv_list.txt

      Action: Save the list to a file.

    2. Describe Specific Persistent Volume:

      pv=<pv-name> # Replace <pv-name> with the actual PV name
      kubectl describe pv $pv > ${pv}_description.txt

      Action: Save the description to a file.

    3. List Persistent Volume Claims (PVCs):

      kubectl get pvc -n $ns > pvc_list.txt

      Action: Save the list to a file.

    4. Describe Specific Persistent Volume Claim:

      pvc=<pvc-name> # Replace <pvc-name> with the actual PVC name
      kubectl describe pvc $pvc -n $ns > ${pvc}_description.txt

      Action: Save the description to a file.

    5. Check and Adjust Access Permissions:

      • Check Current Access Modes:

        pv=<pv-name> # Replace <pv-name> with the actual PV name
        kubectl get pv $pv -o jsonpath='{.spec.accessModes}' > ${pv}_access_modes.txt

        Action: Save the access modes to a file. (Persistent Volumes are cluster-scoped, so no namespace flag is needed.)

      • Adjust Access Modes (if necessary):

        Note: Adjusting requires creating a new PV and PVC.

        Steps:

        1. Delete Existing PVC (ensure data is backed up):

          kubectl delete pvc <pvc-name> -n $ns
        2. Create New PVC:

          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: <new-pvc-name>
            namespace: <namespace>   # Use the literal namespace; shell variables are not expanded in YAML files
          spec:
            accessModes:
              - ReadWriteMany        # Replace with the required access mode
            resources:
              requests:
                storage: 10Gi        # Adjust as necessary
            storageClassName: <storage-class-name>   # Ensure this matches your storage class

          Apply the new PVC:

          kubectl apply -f <new-pvc-file>.yaml

    Notes:

    Namespace Reference: Ensure $ns is set to the correct namespace. Data Backup: Back up data before deleting any PVCs. Access Modes: Use ReadWriteMany (RWX). Security Context: Set fsGroup in the SIPEnvironment custom resource.

  • Authentication Method Overview:

    JWT Authentication Details: Clarify how JWT authentication is managed. Document the entire authentication flow, including token generation, distribution, and validation mechanisms. Specify any libraries or frameworks used.

  • Custom Resource Definitions (CRDs) Configuration:

    • SIP Environment Configuration CRD: Provide the contents of the SIPEnvironment.yaml to review how the SIP environment is configured to handle JWTs.
    • OMS Environment Configuration CRD: Provide the contents of the OMEnvironment.yaml which configures OMS application's environment.
  • Error Details:

    Screenshot of the Error: Include a screenshot of the error seen when the API fails to authenticate using JWT via a REST client.

  • Logs Collection:

    Application Gateway Pod Logs: Logs from the gateway pod where JWTs are processed can uncover errors in token processing. Example command:

    oc logs oms-gateway-app-75c6894d94-fqsj6 -n sip > sip_gateway_pod.txt

  • JWT Token Validation:

    Validate JWT Token: Verify that the JWTs generated comply with expected standards, including the header, payload, and signature.
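
As a quick local check, the header and payload of a JWT can be base64-decoded and inspected. The token below is a made-up example used only to keep the sketch self-contained; a real SIP token will differ, and its signature must still be verified against the public key (base64url padding may also need to be restored before decoding a real token):

```shell
# Hypothetical token for illustration only; do not share real tokens in a case.
token='eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJzaXAtdXNlciJ9.c2lnbmF0dXJl'

# A JWT has three dot-separated segments: header, payload, signature.
header=$(printf '%s' "$token" | cut -d '.' -f1 | base64 -d)
payload=$(printf '%s' "$token" | cut -d '.' -f2 | base64 -d)

echo "Header:  $header"
echo "Payload: $payload"
```

Confirm that the decoded header names the expected algorithm (for example RS256) and that the payload claims match what the SIP environment expects.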

  • Application Deployment and Configuration Changes:

    Restart on JWT Secret Changes: Note the necessity of restarting pods after changes to JWT configurations. Detail the command or process to safely restart services without causing downtime.

  • Key Pair Generation Details:

    Key Pair Generation for JWT: If you are using your own keys, provide the commands run to generate these keys.
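
As an illustrative sketch (the file names are examples, not paths mandated by SIP), an RSA key pair suitable for RS256 signing can be generated with OpenSSL:

```shell
# Generate a 2048-bit RSA private key (PKCS#8) and extract its public key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out jwt_private.pem
openssl rsa -in jwt_private.pem -pubout -out jwt_public.pem

# Sanity-check the generated private key.
openssl rsa -in jwt_private.pem -check -noout
```

If you generated your keys differently, attach the exact commands you used so Support can reproduce the configuration.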

  • General Information
    • SIPEnvironment.yaml: Attach the complete YAML configuration file for the SIP environment.
    • System Versions
      • Cassandra Version: Specify the version of Cassandra in use.
      • Kafka Version: Specify the version of Kafka in use.
      • Elasticsearch Version: Specify the version of Elasticsearch in use.
  • Cassandra Database
    • List of Keyspaces and Tables:
      • Command: cqlsh -e "DESC KEYSPACES;"
      • For each keyspace, list tables: cqlsh -k <keyspace_name> -e "DESC TABLES;"
  • Kafka Topics
    • List of Topics:
      • Command: kafka-topics --list --bootstrap-server <broker_host:port> (on ZooKeeper-based Kafka versions before 3.0: kafka-topics --list --zookeeper <zookeeper_host:port>)
  • OpenShift and Strimzi
    • Deployment Details: Is OMS deployed on the same cluster as SIP? Specify if using Strimzi for Kafka management.
  • Logging and Errors
    • Pod Logs with Errors: Include logs from the Pods that are throwing errors.
      • Command: kubectl logs <pod_name> -n <namespace>
  • SSL/TLS Configuration
    • SSL Enabled on External Service: Confirm if SSL is enabled and provide the configuration details if possible.
    • External Service Accessibility: Are you able to access the external service externally via a client on the cluster/server that has SIP deployed using the same certificate?
    • Certificate Details:
      • Type of Certificate File to Add to SIP Keystore: .jks or .pem
      • Common Name (CN) in Certificate: Ensure it matches the host passed in the SIPEnvironment.yaml.
      • Complete Certificate Chain: Confirm if the complete certificate chain is being added to the keystore.
      • Attach the Certificate for review.
      • Certificate Creation Command: Provide the command used to generate the certificate.
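
To check the CN and validity dates of a certificate before adding it to the SIP keystore, OpenSSL can print the subject. The self-signed certificate generated below exists only to make the example runnable, and the host sip.example.com is a stand-in; inspect your real certificate the same way and confirm the CN matches the host in SIPEnvironment.yaml:

```shell
# Create a throwaway self-signed certificate (illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout example.key -out example.crt \
  -subj "/CN=sip.example.com"

# Print the subject and expiry; the CN must match the configured host.
openssl x509 -in example.crt -noout -subject -enddate
```

For a real certificate, also run openssl x509 against each certificate in the chain to confirm the complete chain is present in the keystore.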
  • SSL/TLS Debugging
    • TLS and Cipher Suite Configuration: Detail the TLS versions and cipher suites enabled.
    • Client Authentication (Server Side): Specify if client authentication is enforced on the server side and provide configuration details.
  • Logs of SIP Controller Manager Pod
    • Details: Include logs to identify errors or warnings that may have occurred during the upgrade process.
    • Command: kubectl logs <sip-controller-manager-pod-name> -n <namespace>
  • Configuration Files
    • Details: Attach both the previous and current versions for comparison to spot potential misconfigurations that might impact the upgrade.
      • Previous SIPEnvironment.yaml
      • Current SIPEnvironment.yaml
  • Database Checks
    • Details: If IV Appserver and Backend Pods Are Not Coming Up, execute the following command:
    • Command: Run SELECT * FROM inv_upgrade; in the Cassandra database (via cqlsh) to review upgrade-related records.
    • Details: Useful for identifying records that might be causing the upgrade process to stall.
  • Logs from Related Pods
    • Details: These logs can help troubleshoot specific issues with processes that are integral parts of the SIP infrastructure.
    • Include Logs from:
      • SIP IV Operations Pod
      • SIP IV Onboarding Pod
      • SIP-IV-Upgrade Pod
    • Command: kubectl logs <pod-name> -n <namespace>
  • Status of Custom Resources (CR)
    • Details: Viewing the status of CRs can reveal if they are in a failed state or experiencing configuration pending issues.
    • Check the Status of CR:
      • Command: kubectl get <crd-type> <resource-name> -n <namespace> -o yaml (replace <crd-type> with the full CRD name, for example sipenvironments.apps.sip.ibm.com; cr by itself is not a valid resource type)
    • Check Events Related to CR:
      • Details: This helps detect any recent changes or issues reported by Kubernetes that could be impacting the resources.
      • Command: kubectl describe <crd-type> <resource-name> -n <namespace>
  • Kubernetes Cluster Health
    • Details: It's critical to ensure there are no resource shortages, node failures, or connectivity issues that could undermine the upgrade.
    • Check Node Health and Resource Allocation:
      • Command: kubectl get nodes and kubectl describe node <node-name>
  • Configuration Files
    • SIPEnvironment.yaml and OMSEnvironment.yaml
      • Details: Attach the configuration files for both SIP and OMS to review the setup and configurations that could be impacting the integration.
  • Configuration Maps and Properties
    • customer_overrides.properties
      • Details: If customer_overrides.properties is defined within a ConfigMap, share the details of this configuration. This is crucial for understanding any custom settings that might influence the behavior of the OMS deployment.
      • Command: kubectl describe configmap <configmap-name> -n <namespace>
  • Diagnostics and Tracing
    • Tracing in OMS Deployment
      • Details: Depending on the issue—whether it's with the application, agent, or integration server—apply VERBOSE tracing on the relevant OMS component to gather detailed diagnostic data.
      • Instructions: Refer to the provided video tutorial for guidance on setting up tracing for specific OMS components; this will assist in identifying where the integration process is breaking down.
      • Caution: VERBOSE tracing is intensive and not recommended for production environments. In production scenarios, enable UserTracing for a controlled, single-user test instead.
  • Security and Authentication
    • JWT Authentication Details
      • Details: Clarify the JWT authentication process used within the SIP environment, including token generation, distribution, and validation steps.
      • Implementation: Detail any specific libraries or frameworks employed for managing JWTs, and how these are configured within SIPEnvironment.yaml.
  • OMS Secrets Management
    • Details: Examine the secrets related to the OMS deployment, essential for the integration process.
    • Command: kubectl get secret <secret-name> -n <namespace> -o yaml (Make sure to anonymize or mask any sensitive information before sharing.)
    • Purpose: Ensuring that all secrets are correctly configured and secure is vital for maintaining the integrity of the integration process.
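
One way to mask the values before sharing is to redact everything under the data: section of the Secret YAML. The Secret below (and the name oms-jwt-secret) is a made-up sample so the sketch is self-contained; in practice, pipe the output of kubectl get secret <secret-name> -n <namespace> -o yaml through the same filter:

```shell
# Sample Secret for illustration; real secrets come from `kubectl get secret ... -o yaml`.
cat > sample_secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: oms-jwt-secret
data:
  jwt.key: c2VjcmV0LWtleS1ieXRlcw==
  jwt.pub: cHVibGljLWtleS1ieXRlcw==
EOF

# Redact every value under data: while leaving metadata intact.
awk '/^data:/ {d=1; print; next}
     /^[^ ]/  {d=0}
     d && /^  [^ ]/ {sub(/: .*/, ": <REDACTED>")}
     {print}' sample_secret.yaml > masked_secret.yaml

cat masked_secret.yaml
```

Attach the masked file (masked_secret.yaml in this sketch) rather than the raw Secret output.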
  • Configuration Files
    • SIPEnvironment.yaml and OMSEnvironment.yaml
      • Details: Provide the configuration files for both SIP and OMS environments to understand the setup and configurations that might be impacting the integration. These files will give insight into any potential misconfigurations or discrepancies that could be causing issues with integration.
  • Network Logs
    • Network (.har) Logs from Order Hub UI
      • Details: Capture and include the HAR (HTTP Archive) files from the Order Hub UI. These files record the web browser's interaction with the site and can be crucial for identifying issues in the network layer that may be affecting the integration.
      • Instructions: To generate a HAR file, follow these steps:
        • Open your web browser's developer tools (typically accessible by right-clicking on the webpage and selecting "Inspect" or pressing F12).
        • Go to the "Network" tab.
        • Ensure recording (red circle icon) is enabled and clear any existing logs if necessary.
        • Reproduce the issue to capture relevant network activity.
        • Right-click within the network table and select "Save all as HAR with content".
      • Usage Note: These logs contain detailed information about web requests and responses that can be critical for identifying issues such as API failures, slow responses, or incorrect data exchange between systems.
  • Integrated SIP Logging
    • Details: If you are using an integrated SIP environment, enable detailed logging to capture the operational flow and any issues.
    • Instructions: Follow the documented steps here to enable logging, trigger the workflow, and capture comprehensive logs.
    • Action: Share these logs as they can provide critical insights into the failures or operational hitches experienced during the process.
  • Non-Integrated SIP Tracing
    • Details: For instances where SIP is not integrated into the main system, it's crucial to enable tracing to diagnose issues effectively.
    • Instructions: Adhere to the steps documented for IV, Promising, or Utility Service to generate detailed tracing information.
    • Action: This tracing will highlight detailed interactions and pinpoint where failures or misconfigurations are occurring.
  • Specific API Call Failure
    • Details: If a particular API call is failing:
    • Instructions: In your REST client, set the Log-Level header to VERBOSE. This setting increases the log verbosity, providing a detailed account of the API transaction.
    • Command: Invoke the failing API and ensure to capture the logs.
    • Usage Note: VERBOSE logs contain extensive information, so use them judiciously, especially in production environments, to avoid log flooding.
    • Action: Share the logs gathered post-invocation to diagnose what may be causing the API to fail.
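
The failing call can be replayed with the header raised, for example with curl. The URL, port, and endpoint below are stand-ins for your actual SIP API, and the throwaway local HTTP server (which requires python3) exists only to make the sketch runnable:

```shell
# Stand-in for the SIP API: a throwaway local HTTP server (illustration only).
python3 -m http.server 8077 >/dev/null 2>&1 &
srv=$!
sleep 1

# Invoke the (stand-in) API with the Log-Level header set to VERBOSE,
# saving the body and the HTTP status code separately.
curl -s -o response.txt -w '%{http_code}' \
  -H 'Log-Level: VERBOSE' \
  http://localhost:8077/ > status.txt

kill $srv
echo "HTTP status: $(cat status.txt)"
```

Against the real API, keep the same -H 'Log-Level: VERBOSE' header and capture both the response and the server-side logs produced during the call.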
  • SIPEnvironment.yaml Configuration
    • Details: The SIPEnvironment.yaml file is crucial as it contains environment-specific configurations that might affect the operation of SIP APIs.
    • Action: Review and share the SIPEnvironment.yaml file to ensure it's correctly configured and to identify any potential misconfigurations that might lead to issues with API processes.
  • Configuration Files
    • SIPEnvironment.yaml and OMSEnvironment.yaml
    • Details: Provide these configuration files to check for any misconfigurations or settings that could be affecting performance.
    • Action: Review these files for settings related to resource allocations, such as CPU limits, memory limits, and any specific performance tunings that have been applied.
  • Custom Resource Definitions
    • IV, Promising, and Utility ServiceGroups
    • Details: Include the Custom Resource (CR) definitions to review their production configurations.
    • Commands:
      • List Instances of Custom Resource: kubectl get <resource-name> -n sip
      • Retrieve YAML for Specific Instance: kubectl get <resource-name> <instance-name> -o yaml -n sip
    • Example Commands:
      • IV ServiceGroup:
        • List All Instances: kubectl get ivservicegroups.apps.sip.ibm.com -n sip
        • Get YAML for Specific Instance (e.g., dev): kubectl get ivservicegroups.apps.sip.ibm.com dev -o yaml -n sip
      • Promising ServiceGroup:
        • List All Instances: kubectl get promisingservicegroups.apps.sip.ibm.com -n sip
        • Get YAML for Specific Instance (e.g., dev): kubectl get promisingservicegroups.apps.sip.ibm.com dev -o yaml -n sip
      • Utility ServiceGroup:
        • List All Instances: kubectl get utilityservicegroups.apps.sip.ibm.com -n sip
        • Get YAML for Specific Instance (e.g., dev): kubectl get utilityservicegroups.apps.sip.ibm.com dev -o yaml -n sip
  • System Resource Utilization
    • Node and Pod Metrics
    • Details: Check the resource utilization metrics for nodes and pods within the Kubernetes cluster to identify resource constraints or abnormal usage patterns.
    • Command:
      • kubectl top nodes
      • kubectl top pods -n <namespace>
    • Monitoring Tools Data
    • Details: Utilize monitoring tools such as Prometheus or Grafana to collect data on CPU usage, memory usage, I/O operations, and network metrics.
    • Action: Review historical data from these tools to pinpoint when performance degradation starts and correlate it with changes in the environment.
  • Core Dump Generation
    • Generate Core Dump
    • Details: In the event of a crash or significant slowdown, generate a core dump to get a snapshot of the application’s memory.
    • Commands:
      • Exec into the concerned pod: kubectl exec -it <pod-name> -n <namespace> -- /bin/bash
      • Identify the Process ID: Run the following command to list detailed process info:

        for pid in $(ls /proc | grep '^[0-9]*$'); do
          echo "PID: $pid, Process: $(cat /proc/$pid/comm)"
          echo "Command Line: $(cat /proc/$pid/cmdline)"
          echo "Current Working Directory: $(ls -l /proc/$pid/cwd)"
          echo "---------------------------------"
        done
      • Generate the dump: kill -3 <pid>
    • Note: Ensure that the core dumps are saved under the /dumps folder within the pod.
    • Action: Copy the core dump from the pod to a local machine using kubectl cp and attach it to the case for detailed analysis.
  • Diagnostic Script
    • If you have cluster permissions, run the diagnostic script mentioned in Section C and attach the diagnostics.
  • Network Logs
    • Network (.har) Logs
    • Details: For any UI related slowness, capture and include HAR (HTTP Archive) files from the user interface to diagnose any network-related delays or issues.
    • Instructions: Follow steps to generate HAR files:
      • Open your web browser's developer tools (typically accessible by right-clicking on the webpage and selecting "Inspect" or pressing F12).
      • Go to the "Network" tab.
      • Ensure recording (red circle icon) is enabled and clear any existing logs if necessary.
      • Reproduce the issue to capture relevant network activity.
      • Right-click within the network table and select "Save all as HAR with content".
    • Usage Note: These logs contain detailed information about web requests and responses that can be critical for identifying issues such as API failures, slow responses, or incorrect data exchange between systems.
  • Purpose

    This script is designed to collect comprehensive diagnostic data from your Kubernetes namespace. It gathers information about Pods, Network Policies, Custom Resources, Persistent Storage, Nodes, and various other components. The script compiles the data into an HTML report and compresses everything into a single archive file for easy sharing.

    Usage Instructions

    Download the Script:

    Save the provided script as diagnostics_script.sh on your local machine.

    Make the Script Executable:

    Open a terminal and navigate to the directory where the script is saved. Run the following command to make the script executable:

    chmod +x diagnostics_script.sh

    Run the Script:

    Execute the script by specifying the required arguments:

    If you are using OpenShift, use oc:
    To gather diagnostics for a specific pod in a namespace:
    ./diagnostics_script.sh -c oc -p <PodName> -n <NameSpace>

    Replace <PodName> with the name of your pod and <NameSpace> with your namespace.

    To gather diagnostics for all pods in a namespace:
    ./diagnostics_script.sh -c oc -a <NameSpace>

    Replace <NameSpace> with your namespace.

    If you are using vanilla Kubernetes, use kubectl:
    To gather diagnostics for a specific pod in a namespace:
    ./diagnostics_script.sh -c kubectl -p <PodName> -n <NameSpace>

    Replace <PodName> with the name of your pod and <NameSpace> with your namespace.

    To gather diagnostics for all pods in a namespace:
    ./diagnostics_script.sh -c kubectl -a <NameSpace>

    Replace <NameSpace> with your namespace.

    Collect the Output:

    After the script completes, it will generate a compressed file named Diagnostics.tgz containing all the collected data and the HTML report.

    Share the Output:

    Attach the Diagnostics.tgz file and share it with us for further analysis.

    What the Script Does

    Collects Detailed Diagnostics: Gathers detailed information about various Kubernetes components in the specified namespace.
    Generates an HTML Report: Creates an intuitive and user-friendly HTML report summarizing the collected data.
    Compresses Data: Compiles all the collected data and the HTML report into a single compressed file (Diagnostics.tgz).

    Important Note

    This script does not capture any secret or configmap data, ensuring that sensitive information is not included in the diagnostic package.

    Example Commands

    To gather diagnostics for the pod sip-cas-appserver-695b685bbd-4cn8m in the sip namespace using oc:

    ./diagnostics_script.sh -c oc -p sip-cas-appserver-695b685bbd-4cn8m -n sip

    To gather diagnostics for the pod sip-cas-appserver-695b685bbd-4cn8m in the sip namespace using kubectl:

    ./diagnostics_script.sh -c kubectl -p sip-cas-appserver-695b685bbd-4cn8m -n sip

    To gather diagnostics for all pods in the sip namespace using oc:

    ./diagnostics_script.sh -c oc -a sip

    To gather diagnostics for all pods in the sip namespace using kubectl:

    ./diagnostics_script.sh -c kubectl -a sip

    After the script runs, you will find the Diagnostics.tgz file in the same directory. Please attach this file and share it with us for troubleshooting and support.

How to submit diagnostic data to IBM Support


General IBM Support hints and tips

Here you can find a list of useful links for IBM Order Hub and the Support processes:


Document Location

Worldwide


Product Synonym

Sterling Intelligent Promising Containers;SIP;SIP Containers

Document Information

Modified date:
18 July 2025

UID

ibm17150389