SD-WAN Versa Collector Troubleshooting Guide
This document provides useful troubleshooting details for the SD-WAN Versa collector.
Debug SD-WAN Collector
The following scenarios describe how to debug issues while deploying the SD-WAN Versa Collector.
# | Question | Description / Command(s) |
---|---|---|
1 | Are you logged in as root or as the sevone user? | All commands must be run as the sevone user. |
2 | If you are performing a fresh deployment, have you checked that there are no IP-range conflicts? | Refer to SD-WAN Versa Collector Use-Cases Guide > Use-Cases > section Handle IP Conflicts for details on IP address ranges. |
3 | How can the SD-WAN collector, Kubernetes, Helm, SOA, and SevOne NMS application versions be obtained? | On the SD-WAN collector, execute the following commands. On the SevOne NMS appliance, execute the following command. |
4 | Is the Kubernetes cluster healthy? | Execute the following commands. |
5 | How can the application logs for a suspected pod be obtained? | Get logs for the collector where it is deployed. Example: get logs for other pods. On the SevOne NMS appliance. |
6 | How can traces for a request be obtained if it involves retrieving data from SevOne NMS? | On the SevOne NMS appliance. |
7 | What should you do if an issue is related to the user interface? | You must collect the console logs from your browser. |
8 | What should you do if data is not visible for the tunnel, device health, and interface queue objects? | |
Helpful CLI commands
- After creating the virtual machine, you may want to change its name. Execute the following steps.
$ ssh sevone@<SD-WAN Versa collector node IP address>
$ sudo hostnamectl set-hostname "<enter hostname>"
$ sudo reboot
- When provisioning of the control plane node is complete via the user interface, ensure that the control plane node is correctly provisioned from the CLI.
$ ssh sevone@<SD-WAN Versa 'control plane' node IP address>
$ kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
sdwan-node01   Ready    control-plane,master   2m17s   v1.27.1+k3s1
- When the agent nodes have joined the Kubernetes cluster, execute the following command to confirm.
$ ssh sevone@<SD-WAN Versa 'control plane' node IP address>
$ kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
sdwan-node01   Ready    control-plane,master   2m17s   v1.27.1+k3s1
sdwan-node02   Ready    <none>                 2m17s   v1.27.1+k3s1
sdwan-node03   Ready    <none>                 2m17s   v1.27.1+k3s1
- To check the status of the deployment, ensure that all the pods are in Running status.
$ kubectl get pods
NAME                                                 READY   STATUS      RESTARTS   AGE
solutions-sdwan-versa-redis-master-0                 1/1     Running     0          21m
solutions-sdwan-versa-redis-replicas-0               1/1     Running     0          21m
solutions-sdwan-versa-upgrade-sn78p                  0/1     Completed   0          21m
solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw   1/1     Running     0          21m
solutions-sdwan-versa-create-keys-2-cf252            0/1     Completed   0          21m
solutions-sdwan-versa-collector-5c6f7fd4b8-g6k8x     1/1     Running     0          21m
Pod Stuck in a Terminating State
If a pod is stuck and you want it to restart, append --grace-period=0 --force to the end of your kubectl delete pod command.
Example
$ ssh sevone@<SD-WAN Versa collector 'control plane' node IP address or hostname>
$ kubectl delete pod $(kubectl get pods | grep 'dsm' | awk '{print $1}') --grace-period=0 --force
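If you are unsure what the grep/awk pipeline above will select, you can sanity-check it offline against captured output first. A minimal sketch, using illustrative sample output rather than a live cluster (the pod names here are hypothetical):

```shell
#!/bin/sh
# Sample 'kubectl get pods' output; pod names are illustrative only.
sample='NAME                                   READY   STATUS        RESTARTS   AGE
solutions-sdwan-versa-dsm-7d9f-abcde   1/1     Terminating   0          3h
solutions-sdwan-versa-redis-master-0   1/1     Running       0          3h'

# Same extraction as: kubectl get pods | grep 'dsm' | awk '{print $1}'
pod=$(printf '%s\n' "$sample" | grep 'dsm' | awk '{print $1}')
echo "$pod"

# Against a live cluster you would then force-delete the stuck pod:
#   kubectl delete pod "$pod" --grace-period=0 --force
```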
Cached metadata error during upgrade process
If you see the following error during an upgrade, run the command below to delete the cached metadata files. This prompts yum to fetch new metadata the next time it runs.
TASK [Install convert2rhel and latest kernel] **************************************
fatal: [sevonek8s]: FAILED! => {"changed": false, "msg": "No package matching 'convert2rhel' found available, installed or updated", "rc": 126, "results": ["All packages providing kernel are up to date", "No package matching 'convert2rhel' found available, installed or updated"]}
$ sudo yum clean metadata
Redeploy / Update Configuration
If you are deploying the same build again or have updated the /opt/SevOne/chartconfs/solutions-sdwan-versa_custom_guii.yaml file, the following commands must be executed.
This applies only when the configuration has been updated. The helm command uninstalls the deployment along with the base configuration which, by default, is available with the .ova image file.
$ sevone-cli playbook precheck
$ sevone-cli solutions reload
The configuration keys that are commonly updated in this file include:
- collectorService
- affinity
- flowAugmentorService
- augmentor service
- receiverPort
Review / Collect Logs
For collector pod
- Obtain the node IP where the collector pod is running for the SD-WAN Versa collector to check the logs.
$ kubectl get pods -o wide
NAME                                                 READY   STATUS      RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
solutions-sdwan-versa-redis-master-0                 1/1     Running     0          22h     192.168.80.18   sevonek8s   <none>           <none>
solutions-sdwan-versa-redis-replicas-0               1/1     Running     0          22h     192.168.80.20   sevonek8s   <none>           <none>
solutions-sdwan-versa-upgrade-sn78p                  0/1     Completed   0          5h34m   192.168.80.21   sevonek8s   <none>           <none>
solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw   1/1     Running     0          5h34m   10.49.12.2      sevonek8s   <none>           <none>
solutions-sdwan-versa-create-keys-2-cf252            0/1     Completed   0          5h34m   192.168.80.24   sevonek8s   <none>           <none>
solutions-sdwan-versa-collector-5c6f7fd4b8-g6k8x     1/1     Running     0          5h34m   192.168.80.23   sevonek8s   <none>           <none>
Note:
- The pod name for SD-WAN Versa collector returned is solutions-sdwan-versa-collector-5c6f7fd4b8-g6k8x.
- The node IP for SD-WAN Versa collector returned is 192.168.80.23.
- Check the logs for SD-WAN Versa collector, for example.
- Using ssh, log into SD-WAN Versa collector node as sevone.
$ ssh sevone@<SD-WAN Versa collector node IP address>
Example
$ ssh sevone@192.168.80.23
- Change directory to /var/log/sdwan-versa/<collector_name>/<build_version>.
$ cd /var/log/sdwan-versa/<collector_name>/<build_version>
Example
$ cd /var/log/sdwan-versa/versa/7.0.0-build.<###>/
You should see the following folders in this directory. The main folder contains all common logs, whereas agent-specific logs can be found within their respective folders.
- ClearAlertsAgent
- DeviceHealthStreamingAgent
- InstallerAgent
- InterfaceStatStreamingAgent
- MigrationAgent
- CreateAlertsStreamingAgent
- FlowAgent
- InterfaceQueueStreamingAgent
- main
- ObjectDescriptionAgent
- DeviceDescriptionAgent
- FlowInterfaceCacheAgent
- InterfaceStatAgent
- MetadataAgent
- TunnelStatStreamingAgent
- Check logs for InstallerAgent. Similarly, you can check logs for all other agents.
Example
$ cat InstallerAgent/versa_InstallerAgent_7.0.0-build.<###>.log
2023-08-26T00:00:00Z INF Sending SOA request... agent=InstallerAgent endpoint=/sevone.api.v3.Metadata/ObjectTypes requestId=12058
2023-08-26T00:00:00Z INF Received SOA response agent=InstallerAgent elapsed=94.761688ms requestId=12058
2023-08-26T00:00:00Z INF Sending SOA request... agent=InstallerAgent endpoint=/sevone.api.v3.Metadata/IndicatorTypes requestId=12061
2023-08-26T00:00:00Z INF Received SOA response agent=InstallerAgent elapsed=79.449453ms requestId=12061
2023-08-26T00:00:00Z INF Run agent start agent=InstallerAgent
...
2023-08-28T00:02:35Z INF Load done agent=InstallerAgent
2023-08-28T00:02:35Z INF Run agent done agent=InstallerAgent elapsed=2m35.040937749s
2023-08-28T00:02:35Z INF Sending selfmon info to NMS agent=InstallerAgent
2023-08-28T00:02:35Z INF Sending request... agent=InstallerAgent method=POST requestId=25505 url=https://<vDirector IP>/api/v2/devices/data
2023-08-28T00:02:35Z INF Received response agent=InstallerAgent elapsed=12.598471ms requestId=25505 status="200 OK"
Note: If you see INF Run agent done agent=InstallerAgent, then you are ready for the build step. If the command does not return this log message, please contact IBM SevOne Support.
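To check for the completion marker without reading the whole log, you can grep for it. A minimal sketch, assuming a small stand-in log file (the path and log lines below are illustrative, not from a live collector):

```shell
#!/bin/sh
# Create a small stand-in for InstallerAgent/versa_InstallerAgent_<ver>.log.
cat > /tmp/sample_installer.log <<'EOF'
2023-08-28T00:02:35Z INF Load done agent=InstallerAgent
2023-08-28T00:02:35Z INF Run agent done agent=InstallerAgent elapsed=2m35s
EOF

# Grep for the marker that indicates readiness for the build step.
if grep -q 'INF Run agent done agent=InstallerAgent' /tmp/sample_installer.log; then
  echo "ready for build step"
else
  echo "marker missing - contact IBM SevOne Support"
fi
```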
- The build step prepares your SD-WAN Versa collector. It executes the conntrack command
that clears out all entries from the conntrack table and restarts the collector pod.
For single vDirector
$ sevone-cli solutions run_buildstep --deployment_name=<deployment_name>
Example
$ sevone-cli solutions run_buildstep --deployment_name=solutions-sdwan-versa
Note: The deployment name is the name of the application specified in the directory /etc/ansible/group_vars/all.
For multi-vDirector (to delete two collector pods)
$ sevone-cli solutions run_buildstep
For augmentor pod
- Obtain the node IP where the augmentor pod is running to check the logs.
$ kubectl get pods -o wide
NAME                                                 READY   STATUS      RESTARTS   AGE     IP              NODE        NOMINATED NODE   READINESS GATES
solutions-sdwan-versa-redis-master-0                 1/1     Running     0          22h     192.168.80.18   sevonek8s   <none>           <none>
solutions-sdwan-versa-redis-replicas-0               1/1     Running     0          22h     192.168.80.20   sevonek8s   <none>           <none>
solutions-sdwan-versa-upgrade-sn78p                  0/1     Completed   0          5h34m   192.168.80.21   sevonek8s   <none>           <none>
solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw   1/1     Running     0          5h34m   10.49.12.2      sevonek8s   <none>           <none>
solutions-sdwan-versa-create-keys-2-cf252            0/1     Completed   0          5h34m   192.168.80.24   sevonek8s   <none>           <none>
solutions-sdwan-versa-collector-5c6f7fd4b8-g6k8x     1/1     Running     0          5h34m   192.168.80.23   sevonek8s   <none>           <none>
Note:
- The pod name for SD-WAN Versa augmentor returned is solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw.
- The node IP for SD-WAN Versa augmentor returned is 10.49.12.2.
- Check the logs for SD-WAN Versa augmentor pod.
$ kubectl logs <augmentor pod name>
Example
$ kubectl logs solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw
- To view old logs, find them at /opt/SevOne/logs/pods/ on the augmentor node.
For other pods
Logs can be collected at the pod level. The status of pods must be Running.
By default, resource-type = pod. For logs where resource-type = pod, you may pass just the pod-name; resource-type is optional.
Using ssh, log into SD-WAN collector control plane node as sevone.
$ ssh sevone@<SD-WAN Versa collector 'control plane' node IP address or hostname>
Ensure that all pods are either Running or Completed.
Example: Get 'pod' names
$ kubectl get pods
Get resource types
Get 'all' resource types
$ kubectl get all | more
Get resource type for a pod
$ kubectl get all | grep <pod-name>
Example: Get resource type for pod-name containing 'solutions-sdwan'
$ kubectl get all | grep solutions-sdwan
pod/solutions-sdwan-versa-redis-master-0 1/1 Running 0 22h
pod/solutions-sdwan-versa-redis-replicas-0 1/1 Running 0 22h
pod/solutions-sdwan-versa-upgrade-sn78p 0/1 Completed 0 5h37m
pod/solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw 1/1 Running 0 5h37m
pod/solutions-sdwan-versa-create-keys-2-cf252 0/1 Completed 0 5h37m
pod/solutions-sdwan-versa-collector-5c6f7fd4b8-g6k8x 1/1 Running 0 5h37m
service/solutions-sdwan-versa-redis-headless ClusterIP None <none> 6379/TCP 22h
service/solutions-sdwan-versa-redis-master ClusterIP 192.168.107.232 <none> 6379/TCP 22h
service/solutions-sdwan-versa-redis-replicas ClusterIP 192.168.107.107 <none> 6379/TCP 22h
service/solutions-sdwan-versa LoadBalancer 192.168.101.63 10.49.12.2 50001:19856/UDP 22h
service/solutions-sdwan-versa-flowservice LoadBalancer 192.168.102.61 10.49.12.2 9992:9992/UDP 5h37m
deployment.apps/solutions-sdwan-versa-aug-decoder 1/1 1 1 5h37m
deployment.apps/solutions-sdwan-versa-collector 1/1 1 1 22h
replicaset.apps/solutions-sdwan-versa-collector-7d76f974df 0 0 0 22h
replicaset.apps/solutions-sdwan-versa-aug-decoder-58fc5dfc6d 1 1 1 5h37m
replicaset.apps/solutions-sdwan-versa-collector-5c6f7fd4b8 1 1 1 5h37m
statefulset.apps/solutions-sdwan-versa-redis-master 1/1 22h
statefulset.apps/solutions-sdwan-versa-redis-replicas 1/1 22h
job.batch/solutions-sdwan-versa-upgrade 1/1 3s 5h37m
job.batch/solutions-sdwan-versa-create-keys-2 1/1 8s 5h37m
solutions-sdwan-versa in the example above is the deployment name that prefixes the pod names.
Get logs
$ kubectl logs <resource-type>/<pod-name>
Example: Get logs for pod-name 'solutions-sdwan-versa-redis-master'
$ kubectl logs statefulset.apps/solutions-sdwan-versa-redis-master
OR
$ kubectl logs sts/solutions-sdwan-versa-redis-master
Example: Get logs for pod-name 'solutions-sdwan-versa-redis-master' with timestamps
$ kubectl logs statefulset.apps/solutions-sdwan-versa-redis-master --timestamps
OR
$ kubectl logs sts/solutions-sdwan-versa-redis-master --timestamps
By default, resource-type = pod.
Example: <resource-type> = pod; <resource-type> is optional
$ kubectl logs pod/solutions-sdwan-versa-redis-master-0
OR
$ kubectl logs solutions-sdwan-versa-redis-master-0
Collect Logs for a Pod with One Container
- Using ssh, log into the SD-WAN collector control plane node as sevone.
$ ssh sevone@<SD-WAN Versa collector 'control plane' node IP address or hostname>
- Obtain the list of containers that belong to a pod.
Example: Pod 'solutions-sdwan-versa-redis-master-0' contains one container, 'redis'
$ kubectl get pods solutions-sdwan-versa-redis-master-0 -o jsonpath='{.spec.containers[*].name}{"\n"}'
redis
- Collect logs.
Important: For pods with only one container, -c <container-name> in the command below is optional.
$ kubectl logs <pod-name> -c <container-name>
OR
$ kubectl logs <pod-name>
Example
$ kubectl logs solutions-sdwan-versa-redis-master-0 -c redis
OR
$ kubectl logs solutions-sdwan-versa-redis-master-0
Start Collector
To start the collector, execute the following commands.
- Using ssh, log into the SD-WAN collector control plane node as sevone.
$ ssh sevone@<SD-WAN collector 'control plane' node IP address or hostname>
- Start the collector.
$ sevone-cli solutions reload
Scenario-1 (no changes to the collectorConfig file or secrets)
Note:
- create-keys pods will restart to set up NMS v2 and v3 API keys.
- Collector pod will not restart.
Scenario-2 (no changes in secrets but changes in the collectorConfig file)
Note:
- create-keys pods will restart.
- Only the changed collector pod will restart.
Scenario-3 (changes in secrets but no changes in the collectorConfig file)
Note:
- create-keys pods will restart.
- Only the changed collector pod will restart.
Scenario-4 (changes in both secrets and the collectorConfig file)
Note:
- create-keys pods will restart.
- Only the changed collector pod will restart.
Stop Collector
To stop the collector, execute the following commands.
- Using ssh, log into the SD-WAN collector control plane node as sevone.
$ ssh sevone@<SD-WAN collector 'control plane' node IP address or hostname>
- Stop the collector.
$ sevone-cli solutions stop_collector
Upgrade Collector
If you are upgrading from SD-WAN 2.9 to an SD-WAN version greater than 2.9, execute the following steps. By default, the .ova image is already running a base configuration of the collector. If the configuration has been modified, the following command must be executed.
$ sevone-cli solutions reload
- Extract the latest tar files provided to you by IBM SevOne Production or IBM SevOne Support into the /opt/SevOne/upgrade folder.
- Run the following commands.
- $ rm -rf /opt/SevOne/upgrade/utilities
- $ tar xvfz $(ls -Art /opt/SevOne/upgrade/sevone_solutions_sdwan_*.tgz | tail -n 1) -C /opt/SevOne/upgrade/ ./utilities
- $ sudo rpm -Uvh /opt/SevOne/upgrade/utilities/sevone-cli-*.rpm
- Execute the $ sevone-cli cluster down command.
- Run the $ sevone-cli solutions upgrade --no_guii command. Run the $ sevone-cli solutions upgrade command instead when upgrading via the GUI installer.
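The extraction step above relies on `ls -Art ... | tail -n 1` to pick the most recently modified tarball. A minimal sketch demonstrating that selection offline, using a throwaway directory and hypothetical file names instead of /opt/SevOne/upgrade:

```shell
#!/bin/sh
# Demonstrate newest-file selection in a temporary directory.
dir=$(mktemp -d)
touch -t 202401010000 "$dir/sevone_solutions_sdwan_5.0.tgz"
touch -t 202402010000 "$dir/sevone_solutions_sdwan_6.0.tgz"

# -Art sorts by modification time, oldest first; tail -n 1 is the newest.
latest=$(ls -Art "$dir"/sevone_solutions_sdwan_*.tgz | tail -n 1)
echo "$latest"
```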
'Agent' Nodes in a Not Ready State after Rebooting
Perform the following action if the agent nodes are in a Not Ready state after rebooting.
Ensure SD-WAN collector is 100% deployed
Check the status of the deployment by running the following command. Ensure that everything is in Running status.
$ ssh sevone@<SD-WAN Versa collector 'control plane' node IP address or hostname>
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
solutions-sdwan-versa-redis-master-0 1/1 Running 0 21m
solutions-sdwan-versa-redis-replicas-0 1/1 Running 0 21m
solutions-sdwan-versa-upgrade-sn78p 0/1 Completed 0 21m
solutions-sdwan-versa-aug-decoder-58fc5dfc6d-9l6kw 1/1 Running 0 21m
solutions-sdwan-versa-create-keys-2-cf252 0/1 Completed 0 21m
solutions-sdwan-versa-collector-5c6f7fd4b8-g6k8x 1/1 Running 0 21m
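Scanning the pod listing by eye scales poorly; the Running/Completed check can be scripted. A minimal sketch, run against illustrative sample output rather than a live cluster (the failing pod name is hypothetical):

```shell
#!/bin/sh
# Sample 'kubectl get pods' output; the CrashLoopBackOff pod is made up.
sample='NAME                                     READY   STATUS      RESTARTS   AGE
solutions-sdwan-versa-redis-master-0     1/1     Running     0          21m
solutions-sdwan-versa-upgrade-sn78p      0/1     Completed   0          21m
solutions-sdwan-versa-collector-abc      0/1     CrashLoopBackOff 5     21m'

# Flag every pod whose STATUS is neither Running nor Completed.
bad=$(printf '%s\n' "$sample" | awk 'NR>1 && $3!="Running" && $3!="Completed" {print $1}')
echo "${bad:-all pods healthy}"
```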
Restart SOA
Use the support user for NMS version 7.0.0. For NMS versions prior to 7.0.0, use the root user instead of the support user.
If SevOne NMS has been upgraded or downgraded, make sure that the SOA container is restarted after a successful upgrade/downgrade. Execute the following commands.
From SevOne NMS appliance,
$ ssh support@<NMS appliance>
$ sudo podman restart nms-nms-soa
Domain Name Resolution (DNS) not working
The DNS server must be able to resolve the SD-WAN collector's hostname on both the control plane and agent nodes; otherwise, SD-WAN collector will not work. Add your DNS servers via nmtui or edit the /etc/resolv.conf file directly, as shown in the steps below. The following nodes are used in this example.
Hostname | IP Address | Role |
---|---|---|
sdwan-node01 | 10.123.45.67 | control plane |
sdwan-node02 | 10.123.45.68 | agent |
Also in this example, the following DNS configuration is used, with DNS search records sevone.com and nwk.sevone.com.
Nameserver | IP Address |
---|---|
nameserver | 10.168.16.50 |
nameserver | 10.205.8.50 |
- Using ssh, log into the designated SD-WAN collector control plane node and agent node as sevone from two different terminal windows.
SSH to 'control plane' node from terminal window 1
$ ssh sevone@10.123.45.67
SSH to 'agent' node from terminal window 2
$ ssh sevone@10.123.45.68
- Obtain the list of DNS entries in the /etc/resolv.conf file for both the control plane and agent nodes in this example.
From terminal window 1
$ cat /etc/resolv.conf
# Generated by NetworkManager
search sevone.com nwk.sevone.com
nameserver 10.168.16.50
nameserver 10.205.8.50
From terminal window 2
$ cat /etc/resolv.conf
# Generated by NetworkManager
search sevone.com nwk.sevone.com
nameserver 10.168.16.50
nameserver 10.205.8.50
- Ensure that the DNS server can resolve the SD-WAN collector's hostname / IP address on both the control plane and agent nodes, along with the DNS entries in the /etc/resolv.conf file (see the search line and nameserver(s)).
From terminal window 1
The following output shows that the DNS server can resolve hostname / IP address on both the control plane and the agent nodes.
Check if 'nslookup' resolves the 'control plane' IP address
$ nslookup 10.123.45.67
67.45.123.10.in-addr.arpa   name = sdwan-node01.sevone.com.
Check if 'nslookup' resolves the 'control plane' hostname
$ nslookup sdwan-node01.sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     sdwan-node01.sevone.com
Address:  10.123.45.67
Check if 'nslookup' resolves the 'agent' IP address
$ nslookup 10.123.45.68
68.45.123.10.in-addr.arpa   name = sdwan-node02.sevone.com.
Check if 'nslookup' resolves the 'agent' hostname
$ nslookup sdwan-node02.sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     sdwan-node02.sevone.com
Address:  10.123.45.68
nslookup name 'sevone.com' in search line in /etc/resolv.conf
$ nslookup sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     sevone.com
Address:  23.185.0.4
nslookup name 'nwk.sevone.com' in search line in /etc/resolv.conf
$ nslookup nwk.sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     nwk.sevone.com
Address:  25.185.0.4
nslookup nameserver '10.168.16.50' in /etc/resolv.conf
$ nslookup 10.168.16.50
50.16.168.10.in-addr.arpa   name = infoblox.nwk.sevone.com.
nslookup nameserver '10.205.8.50' in /etc/resolv.conf
$ nslookup 10.205.8.50
50.8.205.10.in-addr.arpa   name = infoblox.colo2.sevone.com.
From terminal window 2
The following output shows that the DNS server can resolve hostname / IP address on both the control plane and the agent nodes.
Check if 'nslookup' resolves the 'agent' IP address
$ nslookup 10.123.45.68
68.45.123.10.in-addr.arpa   name = sdwan-node02.sevone.com.
Check if 'nslookup' resolves the 'agent' hostname
$ nslookup sdwan-node02.sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     sdwan-node02.sevone.com
Address:  10.123.45.68
Check if 'nslookup' resolves the 'control plane' IP address
$ nslookup 10.123.45.67
67.45.123.10.in-addr.arpa   name = sdwan-node01.sevone.com.
Check if 'nslookup' resolves the 'control plane' hostname
$ nslookup sdwan-node01.sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     sdwan-node01.sevone.com
Address:  10.123.45.67
nslookup name 'sevone.com' in search line in /etc/resolv.conf
$ nslookup sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     sevone.com
Address:  23.185.0.4
nslookup name 'nwk.sevone.com' in search line in /etc/resolv.conf
$ nslookup nwk.sevone.com
Server:   10.168.16.50
Address:  10.168.16.50#53
Name:     nwk.sevone.com
Address:  25.185.0.4
nslookup nameserver '10.168.16.50' in /etc/resolv.conf
$ nslookup 10.168.16.50
50.16.168.10.in-addr.arpa   name = infoblox.nwk.sevone.com.
nslookup nameserver '10.205.8.50' in /etc/resolv.conf
$ nslookup 10.205.8.50
50.8.205.10.in-addr.arpa   name = infoblox.colo2.sevone.com.
Important: If any of the nslookup commands in terminal window 1 or terminal window 2 above fail or return one or more of the following errors, you must first resolve the name-resolution issue; otherwise, SD-WAN collector will not work.
Example
** server can't find 67.45.123.10.in-addr.arpa.: NXDOMAIN
** server can't find 68.45.123.10.in-addr.arpa.: NXDOMAIN
*** Can't find nwk.sevone.com: No answer
etc.
If name resolution fails for any reason after SD-WAN collector has been deployed, normal collector operations may also fail. It is therefore recommended to ensure that the DNS configuration is always working.
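When reviewing saved nslookup output in bulk, the failure signatures above can be detected with a simple grep. A minimal sketch using an illustrative sample line:

```shell
#!/bin/sh
# Screen saved nslookup output for the documented failure signatures.
out="** server can't find 67.45.123.10.in-addr.arpa.: NXDOMAIN"
if printf '%s\n' "$out" | grep -Eq 'NXDOMAIN|No answer'; then
  echo "name resolution problem - fix DNS before proceeding"
fi
```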
ERROR: Failed to open ID file '/home/sevone/.pub': No such file or directory
As a security measure, fresh installations do not ship with pre-generated SSH keys.
- Using ssh, log into the SD-WAN Versa collector control plane node as sevone.
$ ssh sevone@<SD-WAN collector 'control plane' node IP address or hostname>
Example
$ ssh sevone@10.123.45.67
- Execute the following command to generate unique SSH keys for your cluster.
$ sevone-cli cluster setup-keys
Change Collector Log Level
To change the collector log level for any particular agent without redeploying the collector, perform the following steps.
- Get a redis-cli shell on the master node.
$ kubectl exec -it {redis-pod} -- redis-cli
- Publish a debug message to the loggingCommand channel.
PUBLISH loggingCommand "{agent-name}:{loglevel}:{logtype}:{minutes}"
Where,
- agent-name - Defines the name of the agent.
- loglevel - Defines the log level for the collector. Value can be info, debug, warning, or error.
- logtype - Defines the type of logs. Value can be nms, vendor, or all.
  - nms - Only NMS API responses will be printed in debug logs.
  - vendor - Only vendor (vManage / vDirector) API logs will be printed in debug logs.
  - all - Both NMS and vendor logs will be printed in debug logs.
- minutes - Defines the time (in minutes) for which logs must be printed based on this message. For example, if minutes is set to 5, debug logs will be printed for 5 minutes based on the message sent from redis.
Example
PUBLISH loggingCommand "DeviceHealthStreamingAgent:debug:all:1"
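It can help to check that the loggingCommand string matches the expected <agent-name>:<loglevel>:<logtype>:<minutes> shape before publishing it. A minimal sketch; the regular expression is an assumption derived from the value lists above, not from the product source:

```shell
#!/bin/sh
# Validate the message format before publishing. The pattern below is an
# assumption based on the documented values (info/debug/warning/error,
# nms/vendor/all, integer minutes).
msg="DeviceHealthStreamingAgent:debug:all:1"
if printf '%s' "$msg" | grep -Eq '^[A-Za-z]+:(info|debug|warning|error):(nms|vendor|all):[0-9]+$'; then
  echo "valid: $msg"
fi

# Against a live cluster you would then publish it:
#   kubectl exec -it {redis-pod} -- redis-cli PUBLISH loggingCommand "$msg"
```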
SSU (Self Service Upgrade)
General
Unable to Start SSU
When installing via the GUI, if you see the following error, execute the command export LC_ALL='en_US.utf8' and retry.
Unable to Login
If you see the following error after entering the correct credentials, change the API port to any accessible port from the browser. For more details, please refer to SD-WAN Versa Collector Upgrade Process Guide > FAQs > section Change Ports.
Pre-check Stage Failures
Invalid vDirector Credentials
When doing a pre-check, if you see the following error, please provide the correct Base64 credentials for vDirector.
Invalid NMS API Credentials
When doing a pre-check, if you see the following error, please provide the correct Base64 API credentials for SevOne NMS.
Invalid NMS SSH Credentials
When doing a pre-check, if you see the following error, please provide the correct Base64 SSH credentials for SevOne NMS.
Invalid DI API Credentials
When doing a pre-check, if you see the following error, please provide the correct Base64 API credentials for SevOne Data Insight.
Credentials are not in Base64 Format
When doing a pre-check, if you see the following error, please provide credentials (username & password) in Base64 format for the controller, NMS SSH, NMS API, & DI API.
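Credentials can be converted to Base64 on any Linux shell before pasting them into the pre-check form. A minimal sketch with placeholder values ('admin' / 'secret' are not real credentials):

```shell
#!/bin/sh
# Encode placeholder credentials. printf avoids a trailing newline, which
# would otherwise become part of the encoded value.
user_b64=$(printf '%s' 'admin' | base64)
pass_b64=$(printf '%s' 'secret' | base64)
echo "$user_b64"
echo "$pass_b64"

# Decode to double-check what was encoded:
printf '%s' "$user_b64" | base64 -d
```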
PAS Sizing Issue
When doing a pre-check, if you see the following error, please review the sizing details and reconfigure the PAS in the collector and then re-trigger the pre-check.
Post-check Stage Failures
Fail to Import OOTB Reports
When doing a post-check, if you see the following error, please provide the correct Base64 API credentials for SevOne Data Insight.
How to collect technical evidence data on SevOne SD-WAN Solutions
IBM SevOne SD-WAN Solution issues that require technical support engagement on an IBM Support Case require the collection of evidence data from the appliances of the SD-WAN Solution. The command-line tool provides a standard method to collect the must-gather data and important evidence required on a support case.
Objective
This article explains how to run the SevOne SD-WAN Solutions support tools for evidence collection on the SD-WAN Solutions appliances.
Environment
The tool needs to run on the master node of the SD-WAN solution.
Steps
SevOne SD-WAN Solutions Support tool is included in version 7.0 or higher.
$ sevone-cli solutions support collect [modules]
DESCRIPTION - sevone-cli is a command-line toolkit for SevOne SD-WAN Solutions that provides commands to perform different tasks. For example, collection of evidence that uses the sevone-cli solutions support collect command.
The collect command performs the collection of evidence for the troubleshooting and investigation of the SD-WAN Solutions. The collection is in the form of necessary logs, configuration files, command outputs, and database information to facilitate the troubleshooting of a problem in a consistent way.
MODULES: If a module is not specified, this command collects data for all modules listed in the table below by default.
Module | Description |
---|---|
must-gather | Collects all system-related data, pods, and other status information. |
collector | Collects data related to the collector and its logs. |
augmentor | Collects data related to the augmentor and its logs. |
installer | Collects data related to the installer (helm list, SSU logs). |
LOGS
$ sevone-cli solutions support collect collector