How To
Summary
Instructions on how to gather Watson Knowledge Studio logs by using the openshiftCollector script
Steps
Preparation on Red Hat OpenShift
1. Log in to a master node of your OpenShift cluster or a Linux client machine that can access the master node.
2. Run the oc login command to log in to the OpenShift cluster.
oc login https://{cluster_CA_domain}:8443 -u {admin_username} -p {admin_password}
- {cluster_CA_domain} is the CA domain name of your cluster.
- {admin_username} is the username of the OpenShift administrator.
- {admin_password} is the password of the administrator user.
3. Run the oc project command to switch to your project.
{cp4d_namespace} is the namespace where Cloud Pak for Data is installed. Watson Knowledge Studio can be installed only into the namespace where Cloud Pak for Data is installed.
oc project {cp4d_namespace}
4. Confirm that the helm list command works.
If you see the error "Error: transport is closing", add the --tls parameter (helm list --tls).
5. Confirm that the oc get pod -n {namespace_name} command works.
You see Watson Knowledge Studio (WKS) pods with names like {release_name}-ibm-watson-ks-xxxxxxxxx-xxxxx.
{release_name} is the Helm release name of Watson Knowledge Studio that you specified for installation.
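The preparation steps above can be sketched as a single command sequence. This is a template, not a runnable script: the values in braces are placeholders to replace with your own cluster domain, credentials, and namespace.

```shell
# Log in to the OpenShift cluster as an administrator
oc login https://{cluster_CA_domain}:8443 -u {admin_username} -p {admin_password}

# Switch to the namespace where Cloud Pak for Data is installed
oc project {cp4d_namespace}

# Confirm that Helm works; add --tls if "Error: transport is closing" appears
helm list --tls

# Confirm that the Watson Knowledge Studio pods are visible
oc get pod -n {cp4d_namespace}
```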
Gather logs with openshiftCollector.sh
1. Copy openshiftCollector.sh (attached in this technote) to the master node or the Linux client machine you logged into.
# Move the script file to a temporary folder
cd /tmp/
mv ~/openshiftCollector.sh ./
chmod +x openshiftCollector.sh
# Add path to cpd-cli for WKS 1.2
export PATH=$PATH:{path_to_cpd-cli}
# Run script
./openshiftCollector.sh --skip-auth --cluster {cluster_CA_domain}
2. When the script completes successfully, you see the following message:
"Successfully collected and zipped up OpenShift diagnostics data. The diagnostics data is available at ./{cluster_CA_domain}_xx_xx_xx_xx_xx_xx.tgz".
3. Copy the .tgz file and upload it to the support case.
Note:
Depending on how you obtained the script, you might see the following error at runtime.
# ./openshiftCollector.sh
-bash: ./openshiftCollector.sh: /bin/bash^M: bad interpreter: No such file or directory
This error occurs when the script file contains Windows-style CR/LF line endings. Remove the carriage returns with: sed -i 's/\r//' openshiftCollector.sh
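A minimal reproduction of the fix, using a throwaway sample.sh instead of the real collector script:

```shell
# Create a sample script with Windows (CRLF) line endings to reproduce the problem
printf '#!/bin/bash\r\necho ok\r\n' > sample.sh
chmod +x sample.sh

# Strip the carriage returns in place
sed -i 's/\r//' sample.sh

# The script now runs normally and prints "ok"
./sample.sh
```

Alternatively, the dos2unix utility performs the same conversion where it is installed.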
The script tries to gather all available information and runs commands against every potential resource, so it might report some errors. Look for the archive file even if errors were shown. If the archive was not created, a working folder named {cluster_CA_domain}_xx_xx_xx_xx_xx_xx might remain. Archive this folder manually and send it to the support team.
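Packaging the leftover working folder manually can be done with tar. The folder name below is a hypothetical example of the {cluster_CA_domain}_xx_xx_xx_xx_xx_xx pattern, and a stand-in folder is created here only so the commands can be demonstrated end to end:

```shell
# Hypothetical leftover working folder (stand-in created here for demonstration)
FOLDER=mycluster.example.com_22_08_12_10_30_00
mkdir -p "$FOLDER" && echo "partial diagnostics" > "$FOLDER/data.txt"

# Package the folder into a .tgz archive for the support case
tar -czf "$FOLDER.tgz" "$FOLDER"

# Verify the archive is readable before uploading it
tar -tzf "$FOLDER.tgz"
```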
If your cluster has nodes that are not accessible by SSH (for example, ROKS), specify --skip-ssh to skip the diagnostics that require an SSH login.
WKS 4.x does not require cpd-cli for installation. Specify --skip-cpd-cli to skip the availability check for cpd-cli.
Document Location
Worldwide
Document Information
Modified date:
12 August 2022
UID
ibm16612431