ncp_storm_validate.sh
Use this script as a troubleshooting aid for the Apache Storm real-time computation system, which is used to aggregate raw poll data into historical poll data.
Syntax
The ncp_storm_validate script uses the following syntax:

$NCHOME/precision/scripts/ncp_storm_validate.sh [storm] testName testArgs

Where:
- storm is an optional argument that runs the test through the Apache Storm scripts to trigger the Java code. By default, the Java code is triggered directly to reduce unhelpful messages.
- testName is the name of the test to run.
- testArgs are the arguments required by that test.
Examples
Here are some examples of how to run the script:

- Display the current default configuration:
  $NCHOME/precision/scripts/ncp_storm_validate.sh config
- Display the configuration for the named topology, "NMAnotherTopology":
  $NCHOME/precision/scripts/ncp_storm_validate.sh config NMAnotherTopology
- Validate access to the NCPOLLDATA database using existing credentials:
  $NCHOME/precision/scripts/ncp_storm_validate.sh db
- Validate access to the NCPOLLDATA database using existing credentials, additionally using the Apache Storm scripts:
  $NCHOME/precision/scripts/ncp_storm_validate.sh storm db
- Delete all historical poll data aggregated by Storm:
  $NCHOME/precision/scripts/ncp_storm_validate.sh clear -aggregate
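The optional storm argument slots in before the test name, which is easy to get wrong. As a minimal sketch, the helper below builds (and only prints) a command line from the syntax above, so it can be reviewed before running; the function name and the yes/no flag are illustrative assumptions, not part of the product.

```shell
#!/bin/sh
# Sketch: assemble an ncp_storm_validate.sh command line from optional parts.
# The literal $NCHOME placeholder is printed as-is; in a real run the shell
# would expand the variable. This helper only prints the command for review.
build_validate_cmd() {
    use_storm="$1"; shift      # "yes" routes the test through the Storm scripts
    cmd="\$NCHOME/precision/scripts/ncp_storm_validate.sh"
    [ "$use_storm" = "yes" ] && cmd="$cmd storm"
    echo "$cmd $*"             # remaining arguments: testName testArgs
}

build_validate_cmd no  config NMAnotherTopology
build_validate_cmd yes db
```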
Command-line options
The following table describes the command-line options for the ncp_storm_validate script. In all cases, [topology_name] defaults to NMStormTopology if not provided.
Command-line option | Description |
---|---|
clear [topology_name] [-raw \| -aggregate \| -all] | Deletes historical poll data. You are prompted to confirm this action before any data is deleted. You can delete raw poll data (-raw), aggregated poll data (-aggregate), or both (-all). Warning: This option is intended for use during setup and testing only, and is not for use in production. |
config [topology_name] | Displays the specified Storm topology. If you do not specify a topology name, then the command displays the default topology NMStormTopology. |
crypt [topology_name] -password password [-decrypt] | Encrypts or decrypts a password from the logs, and displays the plain-text decrypted password. Note: This option is not compatible with the DbLogins.cfg configuration file. Use the ncp_crypt command for working with that file. |
db [topology_name] | Validates access to the NCPOLLDATA database. Tries to connect to the database using the existing credentials. Running this script with this option produces similar output to the ncp_db_access.pl script, but also validates the JDBC connection. |
dbconfig [topology_name] | Displays the NCPOLLDATA database configuration. Identifies the JDBC URL and username used to access the database. This information is defined by the DbLogins.cfg configuration file for the named domain and is potentially overridden by the jdbc.url value from the backend tnm.properties configuration file. |
dblogins [topology_name] | Loads the backend DbLogins.cfg file using JNI. JNI loading of the DbLogins configuration is a two-step process. |
hb [topology_name] | Displays the current status of the Apache Storm master table in the NCPOLLDATA database. Use this option to identify the master topology and show the current timestamp. Running the script with this option over several minutes displays an incrementing batch identifier, which indicates that the poller is continuously storing data. |
hbconfig [topology_name] | Displays the heartbeat configuration for the Apache Storm master table in the NCPOLLDATA database. |
kafkaexport | Exports sample data for selected topics. The topic selected determines the output format. For a full description of the parameters for this option, see Exporting sample data using the kafkaexport option. |
kafkaimport | Listens for data requests on a specified Kafka topic, or for a specified number of seconds. For a full description of the parameters for this option, see Listening for data requests using the kafkaimport option. |
keyfile [keyFileName] | Validates an existing encryption key, or creates a new one if no key exists. |
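The hb option is most useful when sampled repeatedly, as the table notes. The sketch below wraps that pattern in a small loop; the function name, sample count, and interval are assumptions for illustration, and the command is passed in as a parameter so the loop itself can be exercised without a live installation.

```shell
#!/bin/sh
# Sketch: sample the Storm heartbeat repeatedly so you can watch the batch
# identifier increment between samples (an incrementing identifier indicates
# the poller is continuously storing data).
watch_heartbeat() {
    hb_cmd="$1"      # command to run, e.g. the hb option of ncp_storm_validate.sh
    samples="$2"     # how many samples to take
    interval="$3"    # seconds to wait between samples
    i=1
    while [ "$i" -le "$samples" ]; do
        echo "--- heartbeat sample $i ---"
        $hb_cmd
        [ "$i" -lt "$samples" ] && sleep "$interval"
        i=$((i + 1))
    done
}

# Typical use (requires a live installation):
# watch_heartbeat "$NCHOME/precision/scripts/ncp_storm_validate.sh hb" 5 60
```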
Exporting sample data using the kafkaexport option
The kafkaexport option uses the following syntax:

kafkaexport [topology_name] -topic topic [-clientid clientid] [-instanceid monitoredInstanceId] [-objectid monitoredObjectId] [-value value] [-message messageString]

Here are some examples of how to export sample data using the kafkaexport option.
Note: The topic names are case sensitive. In normal use you do not need to be concerned with topic names. You do need to take care, however, when troubleshooting with the ncp_storm_validate.sh script: type the topic name exactly as spelled here, all lowercase, for example nm.monitoredobject.

- Export test data to Kafka: The format is hard-coded based on the topics configured. The following examples show relevant options for different topic configurations.
- Request a full table dump: The dump option requires Apache Storm to be running. The following examples show relevant options for different topic configurations.
Command-line option | Description |
---|---|
topology_name | Exports sample data from the specified Storm topology. If you do not specify a topology name, then the command exports sample data from the default topology NMStormTopology. |
-topic topic | The topic for which you want to export poll data. Options are: nm.polldata, nm.monitoredinstance, nm.monitoredobject, and nm.datarequest. |
-clientid clientid | This identifier is autogenerated by the system and is logged in the relevant Apache Storm log file. You do not normally need to specify this value. Note: If you need to troubleshoot issues, then you can specify a value for clientid by copying the value logged in the relevant Apache Storm log file. |
-instanceid monitoredInstanceId | Dummy value that can be specified if an nm.polldata or nm.monitoredinstance topic was specified. The value specified here is exported. Note: This value is not used to look up a row in the NCIM topology database. |
-objectid monitoredObjectId | Dummy value that can be specified if an nm.monitoredobject topic was specified. Note: This value is not used to look up a row in the NCIM topology database. |
-value value | Can be used with the nm.polldata topic to specify a dummy value for test purposes. |
-message messageString | Can be used with the nm.datarequest topic to identify the type of data requested. It must be either 'monitoredobject' or 'monitoredinstance', triggering a full dump of the named table in each case. |
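Because the topic names are case sensitive, a mistyped topic is a common source of confusion when troubleshooting. As a minimal sketch, the guard below checks a topic name before it reaches kafkaexport; the function name is an assumption, and the topic list is taken from the options described in this document (assumed complete).

```shell
#!/bin/sh
# Sketch: reject a topic name unless it exactly matches one of the lowercase
# topic names used by ncp_storm_validate.sh.
is_known_topic() {
    case "$1" in
        nm.polldata|nm.monitoredinstance|nm.monitoredobject|nm.datarequest)
            return 0 ;;   # exact, all-lowercase match
        *)
            return 1 ;;   # anything else, including wrong case, is rejected
    esac
}

is_known_topic nm.monitoredobject && echo "ok"
is_known_topic NM.MonitoredObject || echo "rejected: topic names must be all lowercase"
```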
Listening for data requests using the kafkaimport option
The kafkaimport option uses the following syntax:

kafkaimport [topology_name] -topic topic [-runseconds runseconds] [-groupid groupId]

Here are some examples of how to listen for data requests using the kafkaimport option. The format of the output depends on the topics configured.
Note: The topic names are case sensitive. In normal use you do not need to be concerned with topic names. You do need to take care, however, when troubleshooting with the ncp_storm_validate.sh script: type the topic name exactly as spelled here, all lowercase, for example nm.monitoredobject.

ncp_storm_validate.sh kafkaimport -topic nm.monitoredinstance
ncp_storm_validate.sh kafkaimport -topic nm.monitoredobject -runseconds 100000
ncp_storm_validate.sh kafkaimport -topic nm.polldata
Command-line option | Description |
---|---|
topology_name | Listens for data requests from the specified Storm topology. If you do not specify a topology name, then the command listens for data requests on the default topology NMStormTopology. |
-topic topic | The topic for which you want to listen for data requests. Options are: nm.polldata, nm.monitoredinstance, and nm.monitoredobject. |
-runseconds runseconds | The number of seconds for which you want to listen for data requests. |
-groupid groupId | This identifier is autogenerated by the system and is logged in the relevant Apache Storm log file. You do not normally need to specify this value. Note: If you need to troubleshoot issues, then you can specify a value for groupid by copying the value logged in the relevant Apache Storm log file. |
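A bounded kafkaimport listen (via -runseconds) can produce a lot of output, so it is convenient to capture it to a file for later inspection. The sketch below does that; the function name and timestamped file-naming scheme are assumptions, and the listen command is passed in as a parameter so the capture logic can be checked without a live installation.

```shell
#!/bin/sh
# Sketch: run a bounded listen command and capture its combined stdout/stderr
# to a timestamped log file, printing the file name for later inspection.
capture_listen() {
    listen_cmd="$1"
    outfile="kafkaimport-$(date +%Y%m%d-%H%M%S).log"
    $listen_cmd > "$outfile" 2>&1
    echo "$outfile"
}

# Typical use (requires a live installation):
# capture_listen "$NCHOME/precision/scripts/ncp_storm_validate.sh kafkaimport -topic nm.polldata -runseconds 60"
```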