health network-performance command
Check the health of an IBM Guardium® Data Security Center deployment on a Red Hat® OpenShift® cluster with iPerf. These checks detect issues that are related to network performance and network access failures.
Prerequisites
- Log in to Red Hat OpenShift Container Platform as a cluster administrator.
- Verify the configuration file. Important: A Kubernetes configuration file must be at either ~/.kube/config or ~/auth/kubeconfig for this command to work. The file must have the target cluster as the current context.
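As a quick preflight for this prerequisite, you can confirm that a configuration file exists at one of the expected paths and inspect its current context. The following Python sketch is illustrative only: the two paths come from the prerequisite above, while the helper names and the line-based parsing are assumptions, not part of guardcenter-cli.

```python
# Sketch: check for a kubeconfig at the paths that the health commands
# expect, and report its current-context value. The line-based scan is an
# illustrative assumption; a full parser would load the file as YAML.
import os
import re

def current_context(path):
    """Return the current-context value from a kubeconfig file, or None."""
    with open(path) as f:
        for line in f:
            m = re.match(r"current-context:\s*(\S+)", line)
            if m:
                return m.group(1)
    return None

def find_kubeconfig():
    """Return the first expected kubeconfig path that exists, or None."""
    for candidate in ("~/.kube/config", "~/auth/kubeconfig"):
        path = os.path.expanduser(candidate)
        if os.path.isfile(path):
            return path
    return None

if __name__ == "__main__":
    path = find_kubeconfig()
    if path:
        print(f"Using {path}, current context: {current_context(path)}")
    else:
        print("No kubeconfig found at ~/.kube/config or ~/auth/kubeconfig")
```

Before you run the command, confirm that the reported context points at the target cluster.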
Syntax and command options
The following example shows the syntax that you must use when you run the network-performance command:
guardcenter-cli health network-performance \
[--image-prefix=<image-registry-prefix>] \
[--image-tag=<image-tag>] \
[--log-level=debug|trace] \
[--minbandwidth=<MBytes/sec>] \
[--save] \
[--verbose]
Configure the following command options when you run the network-performance command:
| Option | Description |
|---|---|
| --help | Display command help. |
| --image-prefix | Specify the image registry prefix. |
| --image-tag | Specify an image tag for the image that you specified in the --image-prefix option. |
| --log-level | Specify the command log level. Valid values are debug and trace. |
| --minbandwidth | Specify the minimum bandwidth that is accepted between nodes, in MB per second. |
| --save | Save the output and resource YAML files to the local file system. |
| --verbose | Display detailed information about resources in table format. |
Example 1: Basic network performance check
Run the following command for a standard check:
guardcenter-cli health network-performance
The following example shows a successful output with some truncated lines:
############################################################################
NETWORK PERFORMANCE REPORT
############################################################################
Namespace ibm-network-performance created
Daemonset ibm-network-performance-ds created successfully in namespace ibm-network-performance
Daemonset ibm-network-performance-ds started successfully in namespace ibm-network-performance
server node: worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com
wait about 1 minute for the iperf server to be up and running...
worker5.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 367 MBytes/sec.
[SUCCESS...]
worker1.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 390 MBytes/sec.
[SUCCESS...]
worker0.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 356 MBytes/sec.
[SUCCESS...]
worker3.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 392 MBytes/sec.
[SUCCESS...]
worker8.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 364 MBytes/sec.
[SUCCESS...]
worker4.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 342 MBytes/sec.
[FAIL...]
worker6.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 391 MBytes/sec.
[SUCCESS...]
worker7.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 384 MBytes/sec.
[SUCCESS...]
worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 373 MBytes/sec.
[SUCCESS...]
server node: worker5.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com
wait about 1 minute for the iperf server to be up and running...
worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker5.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 361 MBytes/sec.
[SUCCESS...]
... (messages truncated)...
server node: worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com
wait about 1 minute for the iperf server to be up and running...
worker2.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 329 MBytes/sec.
[FAIL...]
worker5.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 334 MBytes/sec.
[FAIL...]
worker1.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 334 MBytes/sec.
[FAIL...]
worker0.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 343 MBytes/sec.
[FAIL...]
worker3.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 317 MBytes/sec.
[FAIL...]
worker8.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 349 MBytes/sec.
[FAIL...]
worker4.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 322 MBytes/sec.
[FAIL...]
worker6.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 345 MBytes/sec.
[FAIL...]
worker7.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 340 MBytes/sec.
[FAIL...]
Deleting Namespace ibm-network-performance ...
Deleting Namespace ibm-network-performance ...
Network performance info gathered successfully!
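Each "client -> server" line in the report is compared against the --minbandwidth threshold, and pairs that fall below it are marked FAIL. The following sketch mirrors that classification for report lines like the ones above; the 350 MB per second default is an illustrative assumption, so substitute the value that you pass to --minbandwidth.

```python
# Sketch: classify "client -> server" report lines against a minimum
# bandwidth threshold, mirroring the SUCCESS/FAIL markers in the report.
# The threshold default of 350 is an illustrative assumption.
import re

LINE_RE = re.compile(
    r"^(?P<client>\S+) client -> server (?P<server>\S+): (?P<mb>\d+) MBytes/sec\.$"
)

def classify(lines, min_bandwidth=350):
    """Yield (client, server, MBytes/sec, 'SUCCESS' or 'FAIL') per line."""
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            mb = int(m.group("mb"))
            status = "SUCCESS" if mb >= min_bandwidth else "FAIL"
            yield m.group("client"), m.group("server"), mb, status

report = [
    "worker5.example.ibm.com client -> server worker2.example.ibm.com: 367 MBytes/sec.",
    "worker4.example.ibm.com client -> server worker2.example.ibm.com: 342 MBytes/sec.",
]
for client, server, mb, status in classify(report):
    print(f"{client} -> {server}: {mb} MBytes/sec [{status}]")
```

A parser like this can help you aggregate FAIL pairs from a long report to spot a consistently slow node, such as worker9 in the output above.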
Example 2: Network performance check with verbose output
The following command checks the network performance and produces a detailed output:
guardcenter-cli health network-performance --verbose
The following example shows a successful output with some truncated lines:
############################################################################
NETWORK PERFORMANCE REPORT
############################################################################
Namespace ibm-network-performance created
Daemonset ibm-network-performance-ds created successfully in namespace ibm-network-performance
Daemonset ibm-network-performance-ds started successfully in namespace ibm-network-performance
INFO[0011] selector: app=ibm-network-performance-ds
INFO[0011] pod name: ibm-network-performance-ds-2tf58
INFO[0011] pod name: ibm-network-performance-ds-6b6j6
INFO[0011] pod name: ibm-network-performance-ds-7xvkn
INFO[0011] pod name: ibm-network-performance-ds-b6qxn
INFO[0011] pod name: ibm-network-performance-ds-jtknd
INFO[0011] pod name: ibm-network-performance-ds-k9dtw
INFO[0011] pod name: ibm-network-performance-ds-pfgpc
INFO[0011] pod name: ibm-network-performance-ds-vpplh
INFO[0011] pod name: ibm-network-performance-ds-xg55l
INFO[0011] pod name: ibm-network-performance-ds-z24md
server node: worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com
wait about 1 minute for the iperf server to be up and running...
level=info msg="command: iperf3 -c 10.254.17.28 -f M"
level=info msg="Namespace: ibm-network-performance, PodName: ibm-network-performance-ds-6b6j6, ContainerName: ibm-network-performance-ds"
Connecting to host 10.254.17.28, port 5201
[ 5] local 10.254.48.53 port 45516 connected to 10.254.17.28 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 319 MBytes 319 MBytes/sec 436 673 KBytes
[ 5] 1.00-2.00 sec 328 MBytes 328 MBytes/sec 53 675 KBytes
[ 5] 2.00-3.00 sec 293 MBytes 293 MBytes/sec 16 781 KBytes
[ 5] 3.00-4.00 sec 340 MBytes 340 MBytes/sec 151 825 KBytes
[ 5] 4.00-5.00 sec 336 MBytes 336 MBytes/sec 119 767 KBytes
[ 5] 5.00-6.00 sec 309 MBytes 309 MBytes/sec 103 753 KBytes
[ 5] 6.00-7.00 sec 310 MBytes 310 MBytes/sec 32 613 KBytes
[ 5] 7.00-8.00 sec 341 MBytes 341 MBytes/sec 127 649 KBytes
[ 5] 8.00-9.00 sec 290 MBytes 290 MBytes/sec 50 695 KBytes
[ 5] 9.00-10.00 sec 325 MBytes 325 MBytes/sec 123 341 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 3.12 GBytes 319 MBytes/sec 1210 sender
[ 5] 0.00-10.04 sec 3.11 GBytes 318 MBytes/sec receiver
iperf Done.
worker5.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker9.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 319 MBytes/sec.
[FAIL...]
... (messages truncated)...
level=info msg="command: iperf3 -c 10.254.47.85 -f M"
level=info msg="Namespace: ibm-network-performance, PodName: ibm-network-performance-ds-xg55l, ContainerName: ibm-network-performance-ds"
Connecting to host 10.254.47.85, port 5201
[ 5] local 10.254.36.234 port 54990 connected to 10.254.47.85 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 364 MBytes 364 MBytes/sec 437 686 KBytes
[ 5] 1.00-2.00 sec 352 MBytes 352 MBytes/sec 239 725 KBytes
[ 5] 2.00-3.00 sec 346 MBytes 346 MBytes/sec 20 779 KBytes
[ 5] 3.00-4.00 sec 327 MBytes 327 MBytes/sec 175 671 KBytes
[ 5] 4.00-5.00 sec 374 MBytes 374 MBytes/sec 37 498 KBytes
[ 5] 5.00-6.00 sec 398 MBytes 398 MBytes/sec 144 886 KBytes
[ 5] 6.00-7.00 sec 414 MBytes 414 MBytes/sec 18 900 KBytes
[ 5] 7.00-8.00 sec 413 MBytes 413 MBytes/sec 101 638 KBytes
[ 5] 8.00-9.00 sec 408 MBytes 408 MBytes/sec 195 583 KBytes
[ 5] 9.00-10.00 sec 384 MBytes 384 MBytes/sec 96 936 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 3.69 GBytes 378 MBytes/sec 1462 sender
[ 5] 0.00-10.04 sec 3.69 GBytes 376 MBytes/sec receiver
iperf Done.
worker4.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com client -> server worker7.sre-sva-3m10w-fips-nfs-1.cp.example.ibm.com: 378 MBytes/sec.
[SUCCESS...]
Deleting Namespace ibm-network-performance ...
Deleting Namespace ibm-network-performance ...
Network performance info gathered successfully!
Example 3: Network performance check with the --save option
Run the following command to check network performance and save the output to your system:
guardcenter-cli health network-performance --save
The --save option creates one server log per node with the file name format iperf_server_<node-name>_result.txt, such as iperf_server_worker0.example.ibm.com_result.txt. The following example shows the contents of a saved server log with some truncated lines:
level=info msg="Executing command in the background: iperf3 -s -f M"
level=info msg="Namespace: ibm-network-performance, PodName: ibm-network-performance-ds-hw7kd, ContainerName: ibm-network-performance-ds"
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.254.20.226, port 41434
[  5] local 10.254.47.86 port 5201 connected to 10.254.20.226 port 41450
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   370 MBytes  370 MBytes/sec
[  5]   1.00-2.00   sec   425 MBytes  425 MBytes/sec
[  5]   2.00-3.00   sec   419 MBytes  419 MBytes/sec
[  5]   3.00-4.00   sec   415 MBytes  415 MBytes/sec
[  5]   4.00-5.00   sec   396 MBytes  396 MBytes/sec
[  5]   5.00-6.00   sec   402 MBytes  402 MBytes/sec
[  5]   6.00-7.00   sec   353 MBytes  353 MBytes/sec
[  5]   7.00-8.00   sec   354 MBytes  354 MBytes/sec
[  5]   8.00-9.00   sec   312 MBytes  312 MBytes/sec
[  5]   9.00-10.00  sec   398 MBytes  398 MBytes/sec
[  5]  10.00-10.04  sec  14.4 MBytes  386 MBytes/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.04  sec  3.77 GBytes  384 MBytes/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.254.31.203, port 52004
[  5] local 10.254.47.86 port 5201 connected to 10.254.31.203 port 52008
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   366 MBytes  366 MBytes/sec
... (messages truncated) ...
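After a run with --save, you can summarize the saved server logs rather than reading each one. The following sketch pulls the receiver-side summary bitrate out of each file; the glob pattern and the parsing are assumptions based on the sample file name and the iperf3 output shown above.

```python
# Sketch: scan saved server logs (iperf_server_*_result.txt) and extract
# each receiver-side summary bitrate in MBytes/sec. The file-name pattern
# is an assumption based on the example above.
import glob
import re

RECEIVER_RE = re.compile(r"([\d.]+)\s+MBytes/sec\s+receiver")

def receiver_rates(directory="."):
    """Map each saved server-log file to its receiver bitrates (MBytes/sec)."""
    rates = {}
    for path in glob.glob(f"{directory}/iperf_server_*_result.txt"):
        with open(path) as f:
            rates[path] = [float(m.group(1)) for m in RECEIVER_RE.finditer(f.read())]
    return rates

for path, mb in receiver_rates().items():
    print(path, mb)
```

Comparing the per-node receiver rates side by side can make it easier to see which node is the bottleneck across test rounds.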