ncp_storm_validate.sh

Use this script as a troubleshooting aid for the Apache Storm realtime computation system, which is used to aggregate raw poll data into historical poll data.

Syntax

The ncp_storm_validate script uses the following syntax:
$NCHOME/precision/scripts/ncp_storm_validate.sh [storm] testName testArgs
Where:
  • storm is an optional argument that runs the test through the Apache Storm scripts instead of triggering the Java code directly. By default, the Java code is triggered directly, which reduces unhelpful messages.
  • testName is the name of the test to run.
  • testArgs are the arguments required by that test.

Examples

Here are some examples of how to run the script:
Display the current default configuration
$NCHOME/precision/scripts/ncp_storm_validate.sh config
Display the configuration for the named topology, "NMAnotherTopology"
$NCHOME/precision/scripts/ncp_storm_validate.sh config NMAnotherTopology
Validate access to the NCPOLLDATA database using existing credentials
$NCHOME/precision/scripts/ncp_storm_validate.sh db
Validate access to the NCPOLLDATA database using existing credentials, this time additionally using the Apache Storm scripts
$NCHOME/precision/scripts/ncp_storm_validate.sh storm db
V4.2 Fix Pack 1: Delete all historical poll data aggregated by Storm
$NCHOME/precision/scripts/ncp_storm_validate.sh clear -aggregate

Command-line options

The following table describes the command-line options for the ncp_storm_validate script. In all cases, [topology_name] defaults to NMStormTopology if not provided.

Table 1. ncp_storm_validate.sh command-line options
Command-line option Description
V4.2 Fix Pack 1: clear [topology_name] [-raw | -aggregate | -all] Deletes historical poll data. You are prompted to confirm this action before any data is deleted. You can delete the following historical poll data (see the examples after this table):
  • -raw: deletes the raw poll data stored by the poller and as yet unprocessed by Storm. In practice, this is the last hour's data from the pollData and pollBatch tables within the NCPOLLDATA database.
  • -aggregate: deletes the historical poll data aggregated by Storm. This is data older than an hour and comprises all of the data in the historical poll data tables in the NCPOLLDATA database; that is, the tables with the prefix pdEwmaFor, for example pdEwmaForDay.
  • -all: deletes both raw and aggregated data.
Warning: This option is intended for use during setup and testing only, and is not intended for production use.
config [topology_name] Displays the configuration for the specified Storm topology. If you do not specify a topology name, then the command displays the configuration for the default topology, NMStormTopology.
crypt [topology_name] -password password [-decrypt] Encrypts or decrypts a password, for example one taken from the logs. When decrypting, displays the plain text password. See the example after this table.
Note: This option is not compatible with the DbLogins.cfg configuration file. Use the ncp_crypt command for working with that file.
db [topology_name] Validates access to the NCPOLLDATA database. Tries to connect to the database using the existing credentials. Running this script with this option produces similar output to the ncp_db_access.pl script, but also validates the JDBC connection.
dbconfig [topology_name] Displays the NCPOLLDATA database configuration. Identifies the JDBC URL and username used to access the database. This information is defined by the DbLogins.cfg configuration file for the named domain and is potentially overridden by the jdbc.url value from the backend tnm.properties configuration file.
dblogins [topology_name] Loads the backend DbLogins.cfg file using JNI.
JNI loading of the DbLogins configuration is a two-step process.
  1. The path to the C++ library (libNcpDbJni.so) is given in the java.library.path property of the $NCHOME/precision/storm/conf/storm.yaml file. That path value should never need to be modified.
  2. To load the library, the library search path (LD_LIBRARY_PATH on Linux, LIBPATH on AIX) must be set so that further C++ dependencies can be found. This happens by default, based on settings picked up automatically from $NCHOME/precision/bin/ncp_common. A quick way to check both steps is shown in the examples after this table.
hb [topology_name] Displays the current status of the Apache Storm master table in the NCPOLLDATA database. Use this option to identify the master topology and show the current timestamp. Run the script with this option over several minutes to observe an incrementing batch identifier, which indicates that the poller is continuously storing data; see the example loop after this table.
hbconfig [topology_name] Displays heartbeat configuration for the Apache Storm master table in the NCPOLLDATA database.
V4.2 Fix Pack 1: kafkaexport Exports sample data for selected topics. The selected topic determines the output format. For a full description of the parameters for this option, see Exporting sample data using the kafkaexport option.
V4.2 Fix Pack 1: kafkaimport Listens for data requests on a specified Kafka topic or for a specified number of seconds. For a full description of the parameters for this option, see Listening for data requests using the kafkaimport option.
keyfile [keyFileName] Validates an existing encryption key or creates a new one if no key exists.
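
The following examples are illustrative sketches that use placeholder values; adapt them to your environment.
Delete only the raw, as-yet-unprocessed poll data, or delete both raw and aggregated data for the topology NMAnotherTopology
$NCHOME/precision/scripts/ncp_storm_validate.sh clear -raw
$NCHOME/precision/scripts/ncp_storm_validate.sh clear NMAnotherTopology -all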
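Encrypt a password, then decrypt an encrypted string copied from a log (myPassword and encryptedString are placeholder values)
$NCHOME/precision/scripts/ncp_storm_validate.sh crypt -password myPassword
$NCHOME/precision/scripts/ncp_storm_validate.sh crypt -password encryptedString -decrypt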
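Check both steps of the dblogins JNI loading process (assuming a Linux system; inspect LIBPATH instead on AIX)
# Step 1: confirm the configured path to the C++ JNI library (libNcpDbJni.so)
grep java.library.path $NCHOME/precision/storm/conf/storm.yaml
# Step 2: confirm that the library search path is set, as ncp_common normally does
echo $LD_LIBRARY_PATH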
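Observe the incrementing heartbeat batch identifier by polling once a minute (a simple loop for a Bourne-compatible shell; stop it with Ctrl-C)
while true; do $NCHOME/precision/scripts/ncp_storm_validate.sh hb; sleep 60; done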

V4.2 Fix Pack 1: Exporting sample data using the kafkaexport option

The kafkaexport option uses the following syntax:
kafkaexport [topology_name] -topic topic [-clientid clientid] [-instanceid monitoredInstanceId] [-objectid monitoredObjectId] [-value value] [-message messageString]
Here are some examples of how to export sample data using the kafkaexport option.
Note: Topic names are case sensitive. In normal use you do not need to be concerned with topic names. When troubleshooting with the ncp_storm_validate.sh script, however, you must type the topic name exactly as shown, in all lowercase; for example, nm.monitoredobject.
V4.2 Fix Pack 1: Export test data to Kafka

The output format is hard-coded for each topic. The following examples show the relevant options for the different topics.

ncp_storm_validate.sh kafkaexport -topic nm.polldata
ncp_storm_validate.sh kafkaexport -topic nm.polldata -value 10 -instanceid 4 -objectid 3
ncp_storm_validate.sh kafkaexport -topic nm.monitoredinstance
ncp_storm_validate.sh kafkaexport -topic nm.monitoredinstance -instanceid 4
ncp_storm_validate.sh kafkaexport -topic nm.monitoredobject
ncp_storm_validate.sh kafkaexport -topic nm.monitoredobject -objectid 3
V4.2 Fix Pack 1: Request a full table dump

Requesting a full table dump requires Apache Storm to be running. The following examples show how to request a dump of each table.

ncp_storm_validate.sh kafkaexport -topic nm.datarequest -message monitoredobject
ncp_storm_validate.sh kafkaexport -topic nm.datarequest -message monitoredinstance
Table 2. Parameters for the kafkaexport option
Command-line option Description
topology_name Exports sample data from the specified Storm topology. If you do not specify a topology name, then the command exports sample data from the default topology NMStormTopology.
-topic topic The topic for which you want to export poll data.
Options are:
  • nm.datarequest
  • nm.monitoredinstance
  • nm.monitoredobject
  • nm.polldata
-clientid clientid This identifier is autogenerated by the system and is logged in the relevant Apache Storm log file. You do not normally need to specify this value.
Note: If you need to troubleshoot issues, you can specify a clientid based on the value logged in the relevant Apache Storm log file; see the example after this table.
-instanceid monitoredInstanceId Dummy value that can be supplied with the nm.polldata or nm.monitoredinstance topics. The value specified here is exported.
Note: This value is not used to look up a row in the NCIM topology database.
-objectid monitoredObjectId Dummy value that can be supplied with the nm.monitoredobject topic.
Note: This value is not used to look up a row in the NCIM topology database.
-value value Can be used with the topic nm.polldata to specify a dummy value for test purposes.
-message messageString Can be used with the topic nm.datarequest to identify the type of data requested. It currently must be either 'monitoredobject' or 'monitoredinstance', triggering a full dump of the named table in each case.
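As a sketch, a clientid taken from the log can be supplied alongside the other kafkaexport parameters (someClientId is a placeholder value):
ncp_storm_validate.sh kafkaexport -topic nm.polldata -value 10 -instanceid 4 -objectid 3 -clientid someClientId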

V4.2 Fix Pack 1: Listening for data requests using the kafkaimport option

The kafkaimport option uses the following syntax:
kafkaimport [topology_name] -topic topic [-runseconds runseconds] [-groupid groupId]
Here are some examples of how to listen for data requests using the kafkaimport option. The format of the output depends on the topics configured.
Note: Topic names are case sensitive. In normal use you do not need to be concerned with topic names. When troubleshooting with the ncp_storm_validate.sh script, however, you must type the topic name exactly as shown, in all lowercase; for example, nm.monitoredobject.
ncp_storm_validate.sh kafkaimport -topic nm.monitoredinstance
ncp_storm_validate.sh kafkaimport -topic nm.monitoredobject -runseconds 100000
ncp_storm_validate.sh kafkaimport -topic nm.polldata
Table 3. Parameters for the kafkaimport option
Command-line option Description
topology_name Listens for data requests from the specified Storm topology. If you do not specify a topology name, then the command listens for data requests on the default topology NMStormTopology.
-topic topic The topic for which you want to listen for data requests.
Options are:
  • nm.datarequest
  • nm.monitoredinstance
  • nm.monitoredobject
  • nm.polldata
-runseconds runseconds The number of seconds for which you want to listen for data requests.
-groupid groupId This identifier is autogenerated by the system and is logged in the relevant Apache Storm log file. You do not normally need to specify this value.
Note: If you need to troubleshoot issues, you can specify a groupid based on the value logged in the relevant Apache Storm log file; see the example after this table.
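As a sketch, a groupid taken from the log can be supplied when listening for a fixed period (someGroupId is a placeholder value):
ncp_storm_validate.sh kafkaimport -topic nm.polldata -runseconds 60 -groupid someGroupId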