Using the Storage Toolkit
This section describes a typical scenario for using the Storage Toolkit, the general process of using the actions in the Storage Toolkit, and the Storage Toolkit limitations and hints.
Scenario for using the Storage Toolkit
The Storage Toolkit enables the following scenario:
- Phase One: You create a situation that monitors available free space in your storage groups. Whenever the situation is triggered, you want to find the affected resources and take action. Specifically, you want to find and take action on the datasets with the most unused space.
- Phase Two: You work in the Tivoli® Enterprise Portal, instead of accessing the z/OS® system and performing a manual process on each resource.
Process for using actions in the Storage Toolkit
The following steps describe the general process of using the actions provided by the Storage Toolkit. The options that are displayed in the context menus change, depending on what resources you select in a workspace view in the Tivoli Enterprise Portal or the RDM interface. For example, if you have selected datasets in a workspace view, you see the Dataset Actions option in the context menu.
- Select one or more rows from a view in a workspace. The rows correspond to resources such as volumes and datasets on z/OS systems.
- Right-click within the rows. A context menu is displayed where you can select an appropriate
option. The following example steps describe how to invoke the appropriate Storage Toolkit dialog box when you select
datasets in a workspace view and right-click within the rows:
- Select Dataset Actions from the context menu to access the submenu.
- Select an appropriate dataset action in the submenu, such as Backup. The corresponding dialog box is displayed.
- Make selections in the dialog box to configure the command that you want to issue. These selections are automatically transformed to command syntax that is valid for the mainframe environment. See Storage Toolkit menus and dialog boxes for links to descriptions of all available dialog boxes.
Reusing action requests
The Storage Toolkit provides three methods for you to reuse an action. Consider whether you want to rerun the action using the same or different resources before you choose the method.
- Method 1: Reusing an existing action using the same resources
- Navigate to the Storage Toolkit Action Requests workspace (available from the Storage Toolkit Navigator node).
- Right-click the action in the Action Requests view that you want to reuse, and select Submit Request from the context menu. The dialog box corresponding to the selected action is displayed.
- Modify the settings that control how the request runs, as appropriate.
- Click OK. The selected action request is submitted on the z/OS system to which you are connected, and the same set of resources is affected by the action. If the selected action request is associated with groups, then the current volume or dataset resources in that same set of groups are affected by the action.
- Method 2: Reusing an existing action using the same or different resources
- Select one or more rows from a view in a workspace.
- Right-click within the rows, and select Submit Command or Job from the context menu. From the Submit Command or Job dialog box, select the action you want to reuse. The dialog box corresponding to the selected action is displayed.
- Modify settings as appropriate. If you want to reuse the action using the rows you selected in Step 1, ensure that the Use request's default data option in the General tab is not selected. If you want to reuse the action using the resources that were used when the action initially ran, select the Use request's default data option.
- Click OK. The action request is submitted on the z/OS system to which you are connected. The resources that are affected depend on the setting of Use request's default data. Additionally, if the Use request's default data option is selected and the action initially ran against a set of groups, then the current volume or dataset resources in that same set of groups are affected by the action.
- Method 3: Reusing a specific execution of an action
- When you rerun a specific execution of an action, the resources
that are used are the same as those associated with the selected execution.
- Navigate to the Storage Toolkit Action Requests workspace (available through the Storage Toolkit Navigator node).
- Find the action that you want to reuse, and link to the Storage Toolkit Result Summary workspace.
- Right-click the specific result in the Action Results Summary view that you want to reuse, and select Submit Request from the context menu. The dialog box that corresponds to the selected action is displayed.
- Modify settings as appropriate.
- Click OK. The selected action request is submitted on the z/OS system to which you are connected. The same set of resources used by the selected execution are affected by the action. If the selected execution is associated with groups, then the current volume or dataset resources in that same set of groups are affected by the action.
Action requests in workspaces that you create
The Storage Toolkit action options are also available in workspaces that you create when both of the following conditions are true:
- Your workspace uses one of the queries that come with the product.
- The query originally had a Storage Toolkit action menu associated with it. These menu options are listed in Storage Toolkit menus and dialog boxes.
About results for action requests
- When you run a command from a command dialog or issue a command from the Issue Command
dialog, the values for the Step Name and Dataset or DD Name are predefined by
the Storage Toolkit.
| Command type | Step Name | Dataset or DD Name |
|---|---|---|
| DFSMSdss | KS3DSS1 | SYSPRINT |
| DFSMShsm | KS3HSM1 | KS3OUT |
| IDCAMS | KS3AMS1 | SYSPRINT |
| ICKDSF | KS3DSF1 | SYSPRINT |
| DFSMSrmm | KS3RMM1 | KS3OUT |
| TSO | KS3TSO1 | KS3OUT |
- When you create JCL using the Create Batch Job dialog box, the values for the Step Name and Dataset or DD Name reflect the values that you specified in the Files whose contents should be copied for later viewing fields on the Options tab.
- When you select Copy JCL and JES logs for later viewing on the JCL tab, the
Step Name values are predefined by the Storage Toolkit. The Dataset or DD
Name values are blank.
| Step Name | Description |
|---|---|
| *JCL | The JCL that was submitted |
| *JESMSGLG | The contents of the JESMSGLG log |
| *JESJCL | The contents of the JESJCL log |
| *JESYSMSG | The contents of the JESYSMSG log |
In some cases, no output is available for an action request. Possible reasons include the following:
- The command that you are running might not return output.
- A file or log to be copied for later viewing is empty.
- You specified 0 as the value for the Maximum output lines option.
- The output is not available, or the Storage Toolkit cannot capture it.

For example, you can write your own JCL that uses HSENDCMD WAIT to issue the LIST command and route the output to a dataset. Using the Create Batch Job dialog box, you can then run this JCL and request the dataset to be copied for later viewing, as in the sketch that follows.
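The following JCL is a minimal sketch of that technique, assuming a TSO batch step (IKJEFT01). The job name HSMLIST, the accounting field, the high-level qualifier MYHLQ, and the dataset MYHLQ.HSM.LISTOUT are placeholders, not values defined by the product:

```
//HSMLIST  JOB (ACCT),'HSM LIST OUTPUT',CLASS=A,MSGCLASS=X
//* Run the DFSMShsm LIST command through TSO batch and route its
//* report to a cataloged dataset that can then be copied for
//* later viewing (dataset and LEVEL values are examples only).
//LISTSTEP EXEC PGM=IKJEFT01,DYNAMNBR=30
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  HSENDCMD WAIT LIST LEVEL(MYHLQ) ODS('MYHLQ.HSM.LISTOUT')
/*
```

In the Create Batch Job dialog box, you could then name the LISTSTEP step and the output dataset in the Files whose contents should be copied for later viewing fields.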
Checking for the completion of action requests
| Conditions of the scenario, in sequential order | Your response options |
|---|---|
| You notice a request on the Tivoli Enterprise Portal that remains in EXECUTING state without completing. | Check the status of the batch job in the z/OS system. |
| Problem scenario 1: The batch job has failed due to a JCL error. | Cancel the request on the Tivoli Enterprise Portal. This action releases the thread and the resources that have been invoked. (Optional) Attempt to fix the JCL error and run the request again. |
| Problem scenario 2: The batch job requires more time to complete. | |
Storage Toolkit limitations and hints
- Datasets not locked: The Storage Toolkit enables you to click the Edit JCL button in the Create Batch Job dialog box and edit a dataset that is located in the mainframe environment. However, you must ensure that no one else accesses the dataset at the same time, because the dataset is not locked during your editing session. Another user could edit the dataset during your editing session, for example, in TSO using ISPF. That user would not see a "dataset in use" message; if that user saves changes before you save yours, your changes overlay theirs. If that user is still editing the dataset when you attempt to save your changes, your attempt to save fails. The results of each editing session might be unpredictable, depending on the scenario and the editing tools that the other user uses.
- EBCDIC code page 037 only: The JCL editor provided by IBM Z OMEGAMON AI for Storage supports EBCDIC code page 037 only. The editing or authoring of JCL in other code pages such as EBCDIC code page 930 (in other words, Japanese EBCDIC) is not supported.
- Storage Toolkit first and last
steps: In every batch job, the Storage Toolkit inserts a Toolkit-specific
first step that sets up a monitor over the steps in the job. The monitor collects targeted
output from the steps in the job. The Storage Toolkit also appends a last step to
the end of the job. The last step collects targeted SYSOUT from the previous steps along with
the JES output and the return code. It also notifies the monitoring agent that the batch job
is complete. The Toolkit-specific first step is inserted immediately before the first EXEC,
PROC, or INCLUDE statement that it locates in the JCL. Note: If your JCL has an INCLUDE statement before an EXEC or PROC statement, the INCLUDE member must not contain JCL statements, such as JCLLIB, that must precede the first job step. Because the Toolkit first step is inserted before the INCLUDE, the batch job fails in this case. The sketch after this item illustrates the insertion points.
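The following sketch of user-defined JCL uses placeholder comments to mark where the Toolkit-specific steps are inserted. The job name USERJOB, the step STEP1, and the IEFBR14 program are illustrative only; the actual Toolkit step names are not shown here:

```
//USERJOB  JOB (ACCT),'EXAMPLE'
//* <Toolkit-specific first step: inserted here, immediately before
//*  the first EXEC, PROC, or INCLUDE statement in the JCL>
//STEP1    EXEC PGM=IEFBR14
//* ... your remaining job steps ...
//* <Toolkit-specific last step: appended at the end of the job>
```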
- Using the null statement to mark the end of a job: When you use the Create Batch Job dialog box to execute user-defined JCL, the Storage Toolkit generates a copy of your JCL to which it adds a Toolkit-specific first step and a Toolkit-specific last step. If your JCL ends with the null JCL statement (//) denoting end of job, that null statement is removed from the generated JCL because the job does not end until after the Toolkit-specific last step runs.
- Conditional processing: Do not specify a COND parameter on the JOB card or on an EXEC statement that might cause the Storage Toolkit steps that were inserted into the JCL to not run. If you use the COND parameter or IF/THEN/ELSE/ENDIF statements, you must ensure that the Storage Toolkit first and last steps run. See the example after this item.
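As an illustration (the job name MYJOB and the accounting field are placeholders), a JOB-level COND parameter such as the following is the kind of specification to avoid:

```
//MYJOB    JOB (ACCT),'AVOID THIS',CLASS=A,COND=(0,NE)
//* COND=(0,NE) on the JOB statement bypasses all remaining steps as
//* soon as any step ends with a nonzero return code, so the
//* Toolkit-specific last step never runs and the action request
//* remains in EXECUTING state.
//STEP1    EXEC PGM=IEFBR14
```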
- No support for multiple jobs in user-defined JCL: When you use the Create Batch Job dialog box to execute user-defined JCL, the JCL must not include multiple jobs. The Storage Toolkit does not support this type of JCL. Results are unpredictable.
- JOB card:
- When you use the Create Batch Job dialog box to execute user-defined JCL, the batch job is submitted using the Replacement JCL JOB card which you specify on the JCL tab. This overrides a JOB card that might be present in the JCL. If you do not specify a Replacement JCL JOB card, your installation-specific JOB card is used.
- Do not specify a CLASS or TYPRUN option on your JOB card that causes the job to be only copied or scanned rather than executed. Because the batch job does not execute, your action request remains in the EXECUTING state. You must cancel the request to release the thread and resources associated with it and to remove it from EXECUTING state.
- Do not specify a COND parameter on the JOB card that might cause the Storage Toolkit first and last steps inserted into the JCL to not run.
- When you request that the JES output be copied for later viewing, make sure that the MSGLEVEL on your JOB card is set to the level of output that you want.
- When you specify your JOB card, consider assigning it a unique job name. If the name matches that of a batch job that is already executing on your z/OS system, your job might be delayed until the executing job completes. A sample JOB card that reflects these recommendations follows this list.
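A minimal sketch of a JOB card suitable for the Replacement JCL JOB card field; the job name STKREQ01, the accounting field, and the classes are placeholders for installation-specific values:

```
//STKREQ01 JOB (ACCT),'STORAGE TOOLKIT',CLASS=A,MSGCLASS=X,
//             MSGLEVEL=(1,1)
//* MSGLEVEL=(1,1) prints the full JCL and allocation messages so
//* that the JES output copied for later viewing is complete. The
//* job name STKREQ01 is an example of a name that is unlikely to
//* clash with other executing jobs.
```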
- Using a PROC in user-defined JCL: When you use the Create Batch Job dialog box, you can run JCL that executes a procedure; however, the Storage Toolkit might not be able to properly copy the contents of files associated with steps in that procedure:
  - The procedure can be instream or in a system or private procedure library. If you use a private procedure library, you must ensure that the JCLLIB statement precedes the Storage Toolkit first step (see the sketch after this item).
  - You can request that files referenced in the procedure be copied for later viewing, but with certain limitations:
    - The step name is the step name in your JCL that executes the procedure. You cannot specify the step names that are in the procedure itself.
    - If the procedure consists of a single step, the contents of the requested files are returned.
    - If there are multiple steps in the procedure, the contents of a requested dataset, or of a DD name that references a dataset, are returned for each procedure step (in other words, multiple times). The contents of a DD name that is routed to SYSOUT are returned for each procedure step in which the SYSOUT DD name is defined (that is, one or more times).
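A minimal sketch of user-defined JCL that executes a procedure from a private procedure library. The job name PROCJOB, the library MY.PRIVATE.PROCLIB, the procedure MYPROC, and the step name RUNPROC are placeholders:

```
//PROCJOB  JOB (ACCT),'RUN A PROC',CLASS=A,MSGCLASS=X
//* The JCLLIB statement follows the JOB statement, so it precedes
//* the Toolkit-specific first step, which is inserted immediately
//* before the first EXEC statement.
//         JCLLIB ORDER=(MY.PRIVATE.PROCLIB)
//RUNPROC  EXEC MYPROC
//* When you copy files for later viewing, specify the step name
//* RUNPROC (the step that executes the procedure), not the step
//* names inside MYPROC.
```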
- Use step names in user-defined JCL: Do not include unnamed steps in your user-defined JCL if you intend to copy files for later viewing that are associated with those steps. The Storage Toolkit requires a step name.
- /*XMIT: Do not use the /*XMIT statement in any of your JCL. The Storage Toolkit does not support this. Results are unpredictable.
- DYNAMNBR: If you submit user-defined JCL that allocates datasets, be aware that the Storage Toolkit allocates datasets in each step, too. You might need to use the DYNAMNBR parameter on your EXEC statement to allow for your datasets plus 3 Storage Toolkit datasets, as in the example after this item.
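For example, if a step dynamically allocates up to 20 datasets of its own, a sketch of the EXEC statement might look like the following. The step name TSOSTEP, the program IKJEFT01, and the count of 20 user allocations are illustrative assumptions:

```
//TSOSTEP  EXEC PGM=IKJEFT01,DYNAMNBR=23
//* DYNAMNBR=23 reserves room for 20 dynamic allocations made by
//* your own processing plus 3 datasets allocated by the Storage
//* Toolkit in this step.
```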
- JCL errors: The Storage Toolkit last step, which is appended to the end of each batch job, notifies the monitoring agent when the batch job completes. When this notification occurs, the action request that is pending completion of the job is updated with the results of the batch job. If the last step in the batch job does not run (for example, because the batch job failed with a JCL error or conditional processing bypassed the last step), the action request remains in EXECUTING state. If an action request stays in EXECUTING state longer than you anticipate, check the status of the batch job on the z/OS system. If the job failed such that the last step did not run, you must cancel the execution of the action request in the Tivoli Enterprise Portal. This releases the thread and resources associated with the request and removes it from EXECUTING state. You can then determine why the job failed, correct the error, and resubmit the request.
- Return codes: Certain return codes, which are generally paired with a status, are
set by the Storage Toolkit when it detects
an error processing an action request. The following table lists common return codes and their
corresponding status:
Table 1. Common Storage Toolkit return codes

| Return code | Status |
|---|---|
| 117 | This status typically indicates that the JCL exceeds 72 characters when the substitution variables are applied. It might also indicate other JCL-related errors, such as a missing JOB card, or another dataset requiring variable substitution that exceeds 80 characters when the substitution variables are applied. |
| 119 | The User Data Server has ended abnormally or the batch job has ended, but the Storage Toolkit is unable to determine the return code. |
| 121 | Authorization has failed. |
| 123 | A dataset error has occurred, such as the dataset containing your JCL does not exist. Messages in the RKLVLOG of the Tivoli Enterprise Monitoring Server might help you analyze this result. |
| 125 | The execution of the action request was stopped because the action request was associated with a set of groups that did not exist or were empty at the time of execution. The status that is displayed in the Result Summary workspace is set to one of the following values. |

- NonexistentGroups: Indicates that none of the groups associated with the request existed.
- EmptyGroups: Indicates that all of the groups associated with the request were empty.
- BadGroups: Indicates that a combination of empty and missing group errors affected the entire set of groups. This status value might also indicate that some other error was detected when the groups were processed. Review the messages in the RKLVLOG to assist your analysis of the results.

Note: Groups might be empty because a collection is running or has not yet run. If so, retry the request when the collection completes.

If you see a return code that does not look familiar, you might want to convert it to its hexadecimal equivalent, because the code might indicate an abend in the batch job. For example, a return code of 193 is the same as X'0C1'.
. - Variable substitution and line limits: Substitution variables that are defined
through the Storage Toolkit are
replaced when the action request runs. The Storage Toolkit creates temporary datasets to
contain the updated statements. There are two basic categories of datasets (JCL and Other) that are updated with substitution variables. The Storage Toolkit processes them as follows:
- JCL dataset: Variable substitution is applied to all components of the batch
job, including the JOB card, extra JCL, the Toolkit-specific steps, and the body of your
JCL along with any instream datasets. The Storage Toolkit interprets column 72 as a
continuation character and preserves its contents. The data between columns 2 and 71 might
shift left or right depending on the size of the variable name and its substitution value.
If the data shifts beyond column 71, the request fails. The return code for the request is
set to 117, and the status for this execution is InvalidJCL. You must perform these actions:
  - Verify that the substitution variables and values are correct and do not have unintended consequences for the components of the batch job.
  - Correct the JCL to ensure that no line exceeds the limit.
- Other dataset: Variable substitution is applied to all records in the other datasets that you specify as needed by the job and that also contain substitution variables. The toolkit makes no assumptions about the contents of the dataset and considers each line, from column 1 to column 80, as a line of data. Variable substitution might cause the data in columns 2 through 80 to shift left or right, depending on the size of the variable names and their values. If the data shifts beyond column 80 (excluding trailing blanks), the request fails. The return code for the request is set to 117, and the status for this execution is InvalidJCL. You must perform these actions:
  - Verify that the substitution variables and values are correct and do not have unintended consequences for the contents of the dataset.
  - Correct the contents of the dataset to ensure that no line exceeds the limit.

A sketch of how substitution can shift a JCL line past the limit follows this item.
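The following minimal sketch illustrates the line-shift problem for the JCL dataset case. The variable %%MYDSN%% is a hypothetical user-defined variable (not one of the reserved names), and the IEBGENER copy step is illustrative only:

```
//* Before substitution, each line is well within the limit.
//COPYSTEP EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSUT1   DD  DSN=%%MYDSN%%,DISP=SHR
//SYSUT2   DD  SYSOUT=*
//SYSIN    DD  DUMMY
//* If %%MYDSN%% is replaced by a long fully qualified name (up to
//* 44 characters), the text that follows the variable shifts right
//* and the SYSUT1 line can extend past column 71, so the request
//* fails with return code 117 and status InvalidJCL.
```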
- Validating JCL: When you write JCL for use in the Create Batch Job dialog box, always check the validity of the statements before you submit the batch job. For example, when you edit JCL in the Edit JCL dialog box, consider whether line lengths will exceed the 72-byte limit after variable substitution is performed. When substitution variables are replaced in the JCL at execution time, any resulting JCL line that contains more than 72 bytes causes the JCL to not be submitted. A status of InvalidJCL is displayed in the Result Summary workspace for the action request.
- Reserved variable names: The Storage Toolkit reserves the following
variable names. You must not use these names for your own
variables:
- %%KS3TK_CMD_DSN%%
- %%KS3TK_HSM%%
- %%KS3TK_DYNAMNBR%%
- Fully Qualified Datasets needed by the job that also contain substitution variables: When you use the Create Batch Job dialog box, you can specify additional datasets that contain substitution variables. The Storage Toolkit creates a temporary dataset with the updates and replaces the name of the original dataset with the temporary one in its copy of your JCL. In order for the names to be replaced, the datasets must be referenced in your JCL; they cannot be in cataloged procedures or INCLUDE members that your JCL might use.
- JES output:
- The techniques that the Storage Toolkit uses to collect JES logs and system-handled output datasets (SYSOUT) require that your z/OS operating system use JES2.
- Because the Storage Toolkit last step collects JES output just before the batch job ends, some of the messages you normally see in the JES logs such as the job start (IEF375I) and job end (IEF376I) messages are not included in the JES output.
- Mainframe commands:
- Mainframe console commands are submitted through an SDSF batch job interface. A forward slash (/) must precede the command, as in this example, which cancels a time-sharing user (tso_user_ID): /C U=tso_user_ID
- Command output is not returned for Mainframe console commands because execution of the command is not synchronized with execution of the batch job.
- Because execution of the command is not synchronized with execution of the batch job, the return code associated with the action request reflects the submission of the command to the SDSF batch job interface. It does not reflect the execution of the command itself.
- Because the Storage Toolkit uses SDSF, your z/OS operating system must use JES2.
- Shared DASD: The temporary datasets that the Storage Toolkit creates to contain the
generated JCL, the results dataset, and other files are shared between the Toolkit and the
batch job. Because the batch job can run on a z/OS
system in your SYSPLEX different from the one where the monitoring agent runs, the temporary
datasets must be created on DASD shared across the systems. Your installation can control the
location of the temporary datasets using options in PARMGEN. These options also control the
location of datasets created using the Edit JCL option in the Create Batch Job dialog box.
In addition, when you use the Create Batch Job dialog box, you specify the dataset containing the JCL that you want to submit and, optionally, datasets needed by the job that also contain substitution variables. These datasets must be cataloged and located on online DASD that is accessible to the z/OS system where the monitoring agent runs.
- APF-authorized load library on remote systems: The Storage Toolkit inserts a first step and last step into every batch job. These steps run Toolkit code that is located in the TKANMODL load library for your installation's run time environment. The load library must be APF-authorized. If the batch job runs on the same z/OS system as the monitoring agent, the load library is normally already APF-authorized. If your batch job runs on another z/OS system in your SYSPLEX, you must ensure that the load library is APF-authorized on that system as well. The load library must also be located on DASD that is shared across the systems. The sketch after this item shows one way to check and add the authorization.
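As an illustration only, and subject to your installation's own procedures, an operator can display the current APF list and dynamically add the library with standard z/OS commands. The dataset name HLQ.RTE.TKANMODL and the volume serial OMEGVL are placeholders for your runtime environment's values:

```
D PROG,APF
SETPROG APF,ADD,DSNAME=HLQ.RTE.TKANMODL,VOLUME=OMEGVL
```

For an SMS-managed library, specify SMS instead of VOLUME. A dynamic SETPROG change does not persist across an IPL; to make the authorization permanent, your systems programmer would normally add the library to a PROGxx parmlib member.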
- Unprintable characters: You must ensure that the files you specify as Files whose contents should be copied for later viewing on the Create Batch Job dialog box or the output from a command on the Issue Command dialog box will contain character data only. If the files or command output contain unprintable characters (for example, hexadecimal data), these characters might not display properly in the Storage Toolkit Result Detail workspace.
- Checkpoint dataset storage exhausted: When you submit an action request, information
about the request is stored in the checkpoint database. When the request completes, results of
the execution are also stored. The information stored in the checkpoint database includes
elements such as:
- The name and description of the request
- The time the request was submitted and the time it completed
- The resources associated with the request
- The return code from the execution of the request
- The output produced by the execution of the request, which might include:
- Command output
- Files copied for later viewing
- The submitted JCL
- The JES files produced by the batch job.
Note: If the results from the execution of an action request exceed the free space available in the checkpoint database, the output is lost entirely. The error message KS3T830E SERVICE CHECKPOINT DATASET STORAGE EXHAUSTED in the RKLVLOG of the Tivoli Enterprise Monitoring Server indicates this condition. The IBM® OMEGAMON® for Storage on z/OS: Troubleshooting Guide provides further information about this issue.