TOKEN command
The TOKEN command has three syntax variations:
- Tokenize an explicit data set or a list of data sets.
- Compare against a previous tokenization in the repository.
- Delete tokens from the repository.
Syntax
TOKEN=(LIST | explicit-data-set
[,MODS | WRITE | NOWRITE | REMOVE])
Parameters
- explicit-data-set
- This can specify a ddname (DD=ddname), a cataloged data set (DSN=data-set-name), or an uncataloged data set (DSN=data-set-name,VOL=volume-serial).
- LIST
- Specify a list of data set names generated by a previous command such as EXPLICIT or PATTERN.
- MODS
- Compare with previous tokenization.
- NOWRITE
- Write tokens to an external token file (referenced by a SYSUT2 DD) instead of the repository.
- REMOVE
- Remove tokens from the repository.
- WRITE
- Write tokens to the repository.
Usage
There are different ways to use the TOKEN command for processing data sets:
- Only the Administrator can tokenize into the repository. The LIST is generated by a prior command such as SHOW=TOKENS.
- The external tokenization file is specified using a SYSUT2 DD, which must have the attributes PS, FB, 4096, 4096.
- A CLOSE command must follow a TOKEN=(LIST, NOWRITE) to indicate that tokenization has concluded.
- The report in TOKEN=(LIST,MODS) can be directed to an OUTFILE DD which has attributes of: PS,FB,80,8000. The report content is influenced by previous options such as OPTIONS=MODS or OPTIONS=CHANGEONLY.
- Only the Administrator can remove a previous tokenization from the repository.
- TOKEN=(DSN=,NOWRITE | MODS | REMOVE) operates on a specific data set rather than a prepared list: it tokenizes the data set, compares it, or removes its token from the repository.
- Only the NOWRITE option produces a token file that can be later compared using REMOTEGROUPCOMP. All other variations of the TOKEN command to tokenize, compare and delete tokens operate against the repository.
- The repository contains one token per data set member, and different data sets may be tokenized at different times, so a comparison report reflects the change in each data set since it was last tokenized. External token files provide a more flexible way to maintain multiple snapshots of possibly multiple environments.
- The LIST in the TOKEN=LIST command is built by previous PATTERN, INCLUDE, EXCLUDE and SHOW=LIST commands or a SHOW=TOKENS command.
- When the TOKEN command is provided with no options, the default behavior is to tokenize into the repository. This is equivalent to TOKEN=(LIST,WRITE) for lists and TOKEN=(DSN,WRITE) for a particular data set.
- When tokenizing an entire volume, warnings such as message CYG303W can occur for members in any data sets on the volume that are in use at the time. If such a message occurs, the specified member is not tokenized correctly. This can be avoided by ensuring that none of the data sets are in use when the tokenization is performed. If this is not possible or practical, the affected data sets can be tokenized individually afterward.
- Full volume tokenization can be used, for example, by system programmers when they have the task of building a new SYSRES. The SYSRES volume can be tokenized before the new SYSRES is built and then again afterwards to identify member-level changes and new data sets. These changes can then be made to the current live SYSRES in a controlled fashion.
- The TOKEN command with the NOWRITE parameter enables tokenizing directly to an external file. NOWRITE provides a better mechanism for taking a snapshot of an environment at one time, whereas tokens saved in the control file for different data sets could have been created at different times.
To avoid this problem, use the TOKEN=(LIST,NOWRITE) command to store the tokens in a user-specified data set. This is called external tokenization. The REMOTEGROUPCOMP command can then be used to compare these files and report on deleted or newly allocated data sets.
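As noted above, a TOKEN command with no options defaults to tokenizing into the repository. For example, on that basis the following two commands are equivalent (the data set name and volume are illustrative):
TOKEN=(DSN=SYS1.PARMLIB,VOL=RES001)
TOKEN=(DSN=SYS1.PARMLIB,VOL=RES001,WRITE)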
Example 1: Tokenizing explicit data sets
Generate the tokens for two data sets and store them in the z/OS Change Tracker VSAM Control File.
TOKEN=(DSN=SYS1.PARMLIB,VOL=RES001)
TOKEN=(DSN=SYS1.PROCLIB,VOL=RES001)
Example 2: Tokenizing and storing the tokens into an external token file
Tokenize a data set referenced by a SYSUT1 DD and write the tokens to a file referenced by a SYSUT2 DD.
//SYSUT1 DD DISP=OLD,DSN=SYS1.PROCLIB
//SYSUT2 DD DISP=SHR,DSN=IBMUSER.TOKENS.PROCLIB (PS,FB,4096,4096)
TOKEN=(DD=SYSUT1,NOWRITE)
CLOSE
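The Usage notes indicate that the DSN= form also accepts NOWRITE, so a specific data set can likewise be tokenized to an external token file. A sketch, with illustrative data set names, using the SYSUT2 attributes and trailing CLOSE described under Usage:
//SYSUT2 DD DISP=SHR,DSN=IBMUSER.TOKENS.PARMLIB (PS,FB,4096,4096)
TOKEN=(DSN=SYS1.PARMLIB,VOL=RES001,NOWRITE)
CLOSE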
Example 3: Tokenizing a set of data sets whose name matches a pattern
PATTERN=SYS1.LINK*
PATTERN=SYS1.PROCLIB
SHOW=LIST
*
TOKEN=LIST
Example 4: Tokenizing data sets on two DASD volumes
To tokenize all data sets on a specific DASD volume, the INCLUDE command must be used. Multiple INCLUDE commands can be specified to tokenize multiple DASD volumes.
INCLUDE=(DSN=*,VOL=RES001)
INCLUDE=(DSN=*,VOL=RES002)
SHOW=LIST
*
TOKEN=LIST
Example 5: Tokenizing a DASD volume and writing tokens to an external file
Tokenize a list of data sets and write the tokens to an external file referenced by a SYSUT2 DD. The list is established by multiple PATTERN and INCLUDE commands.
//SYSUT2 DD DISP=SHR,DSN=IBMUSER.TOKENS (PS,FB,4096,4096)
INCLUDE=(DSN=*,VOL=RES001)
EXCLUDE=SYS1.FONTLIB
...
SHOW=LIST
*
TOKEN=(LIST,NOWRITE)
CLOSE
Example 6: MODS reporting for explicit data sets
The MODS parameter is used to identify all the changes since the tokens were prepared.
TOKEN=(DSN=SYS1.PARMLIB, VOL=RES001, MODS)
TOKEN=(DSN=SYS1.PROCLIB, VOL=RES001, MODS)
Example 7: MODS reporting for a set of data sets matching a pattern
Example 7 shows how to identify the MODS (any changes) made to a group of pre-tokenized data sets over a period. Tokens reflecting the current content of each data set member are compared against the stored tokens in the control file. These newly generated tokens are used for comparison and are not stored in the control file.
OPTION=MODS
*
PATTERN=SYS1.LINK*
PATTERN=SYS1.PROCLIB
SHOW=LIST
TOKEN=(LIST,MODS)
Example 8: MODS identification for a set of data sets matching a pattern
Example 8 illustrates how the OPTION=CHANGEONLY command is used to focus on reporting only the changed data sets and the corresponding changed members within those data sets.
//OUTFILE DD DISP=SHR,DSN=HLQ.CYG.MODS <-- Required DD
OPTION=CHANGEONLY
*
EXCLUDE=SYS1.BRODCAST
EXCLUDE=SYS1.ICEDGTM
EXCLUDE=SYS1.DUMP* <-- Pattern exclusion
EXCLUDE=SYS1.LOGREC
EXCLUDE=SYS1.RACF
EXCLUDE=SYS1.VTOCIX.* <-- Pattern exclusion
*
PATTERN=SYS1.*
SHOW=LIST
*
TOKEN=(LIST,MODS)
Example 9: Refreshing the tokens in the control file
When pre-tokenized data sets already exist in the Control File, use the following commands to refresh those tokens to reflect the current content of the data set members.
SHOW=TOKENS
TOKEN=(LIST)
Example 10: Identifying all changes for all previously tokenized data sets
The SHOW=TOKENS command generates a list of tokenized data sets which currently exist in the z/OS Change Tracker Control File. Then, the TOKEN=(LIST,MODS) command works against this generated list to determine the MODS.
SHOW=TOKENS
TOKEN=(LIST,MODS)
Example 11: REMOVE selected tokens from the repository
To selectively remove the stored tokens for an explicit data set, use the following command.
TOKEN=(DSN=SYS1.PARMLIB,VOL=RES001,REMOVE)
Example 12: Force the removal of a token for a data set that no longer exists
The OPTION=FORCED command allows the token to be removed even though the data set itself no longer exists.
OPTION=FORCED
TOKEN=(DSN=SYS1.USERLIB.ABC,REMOVE)
Example 13: Tokenizing multiple DASD volumes, storing tokens externally
Multiple INCLUDE commands build a LIST of data sets on one or multiple DASD volumes.
The NOWRITE parameter indicates tokenization to an external token file.
//SYSUT2 DD DISP=SHR,DSN=IBMUSER.TOKENS (PS,FB,4096,4096)
INCLUDE=(DSN=*,VOL=RES001)
INCLUDE=(DSN=*,VOL=RES002)
EXCLUDE=SYS1.FONTLIB
...
SHOW=LIST
*
TOKEN=(LIST,NOWRITE)
CLOSE
Example 14: MODS identification of changes for an established LIST
PATTERN=SYS1.PARMLIB.*
PATTERN=SYS1.PROCLIB.*
EXCLUDE=SYS1.PARMLIB.TEST
TOKEN=(LIST,MODS)
Related information
CYGC* jobs in the samples library.
