IBM Support

ICF Catalog Management Recommendations & Guidelines

Question & Answer


Questions and Answers about Integrated Catalog Facility (ICF) catalogs



Our z/OS systems need immediate and long-term catalog maintenance (reorgs, mergecats, and so on), and we need direction because of the size and scope of the effort. In addition to our own large internal z/OS systems, we provide insourcing and outsourcing services and bring in clients whose catalog management is in various stages of neglect. They typically run older versions of z/OS and OS/390. We would appreciate any guidance you can provide to help us be assured of an effective path to catalog management.

Multiple Parallel Sysplex & Monoplex Systems
z/OS 1.3.0 / DFSMS 1.3.0
Large disc storage and tape storage environments are duplexed to a remote data center.
Three DFDSS or FDR backups of each catalog per day.
Our catalogs number in the hundreds on each sysplex, with an aggregate total in the thousands. The catalogs come in all sizes and usage, small to large.

1) What is the recommended method or tool to evaluate catalog growth from the insert, delete, and update activity of data sets and their respective alias entries?

2) What are the pros and cons and recommendations concerning placement of catalogs? Should we place multiple, low-growth, low-activity catalogs on one volume? Should we spread them across several volumes?

3) It is obvious a catalog needs attention when the volume it resides on is running out of space. However, what performance- or availability-related metrics should be evaluated to determine that a reorg is truly necessary when space on a volume is not a factor?

4) Are there negative performance implications to be considered as a result of a reorg of a catalog? Does performance degrade until a certain amount of CI/CA split activity occurs?

5) What are the recommended free space specifications for catalog CIs and CAs?

6) What are the important define parameters, in addition to freespace, related to performance and availability of ICF catalogs? Do you have some recommendations?

7) We have avoided the use of multi-level aliases. We hope to continue with single level aliases unless excessive growth from a single alias forces us into the use of multi-level aliases. What are the gotchas, pros and cons of multi-level alias implementations?

8) What technical publications would you recommend we read and review to verify we are using a sound path to managing our catalogs?

9) We are aware of a limited number of technical manuals including:
z/OS DFSMS Managing Catalogs, and
z/OS DFSMS Access Method Services for Catalogs.
Are there other manuals we should be studying?

10) Perhaps I've missed significant or helpful items relating to this subject (that is, coupling facility exploitation, CAS, and so on); offer any suggestions if you are so inclined.
Thank you in advance for your help.

IBM Response:

Rule One - stay current on maintenance. Even if your customers are running older levels of OS/390 and z/OS, you still need to keep an eye on any maintenance that is marked HIPER. From the DFSMS perspective, if an APAR or PTF is flagged with the keyword DSBREAKER or CATBREAKER, we are not kidding: apply it.

Rule Two - Catalog reorgs should be used to consolidate extents, resize catalogs, and remove or change define parameters. CI or CA splits are not usually an indication of a need to reorg.

By resize catalogs I am referring to the case where you move 50,000 entries out of a catalog that has 100,000 entries.

For changing or removing parameters, what we are talking about is removing the IMBED and REPLICATE attributes from your catalogs and using reasonable data and index CI sizes (we'd recommend a multiple of 4096 for the data CI size and 4096 for the index CI size). If a catalog has many large records (GDGs with a number of candidate volumes, or other data sets that span multiple volumes), you get better performance by going to a larger data CI size.

Take a look at STRNO, BUFND, and BUFNI for your catalogs. 

STRNO should be set high enough to avoid seeing the catalog name on the RMF contention report for the resource SYSZRPLW.

For BUFND, multiply the value of STRNO by 2 or 3.

For BUFNI, compute the value by taking the number of levels in the index plus 1, then multiplying by the value of STRNO.

After the installation of the PTFs for APAR OA25072 (for releases 1.9 through 1.11), the catalog address space calculates an optimum value for BUFND and BUFNI.

Always define a secondary allocation amount for your catalogs. If the catalog needs to extend and there is no secondary, you get to reorg the catalog on the spot. But don't get carried away: don't specify a secondary of 100 cylinders, as freeing 100 cylinders on a volume might not be easy, while freeing 1 to 10 cylinders usually is.
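Pulling the Rule Two suggestions together, the define and tuning might look like the following IDCAMS sketch. All names, volumes, and sizes are hypothetical; the buffer numbers follow the formulas above, assuming STRNO(4) and a two-level index, so BUFND = 4 x 3 = 12 and BUFNI = (2 + 1) x 4 = 12.

```
//DEFUCAT  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Hypothetical name, volume, and sizes - adjust for your shop */
  DEFINE USERCATALOG -
         (NAME(SYS1.USERCAT.APPL) -
          ICFCATALOG -
          VOLUME(CATV01) -
          CYLINDERS(50 5) -
          FREESPACE(10 10) -
          DATA(CONTROLINTERVALSIZE(8192)) -
          INDEX(CONTROLINTERVALSIZE(4096)))
  /* String and buffer tuning can be applied with ALTER */
  ALTER SYS1.USERCAT.APPL -
        STRNO(4) BUFND(12) BUFNI(12)
/*
```

The small secondary of 5 cylinders follows the advice above: large enough to avoid an on-the-spot reorg, small enough that the space can usually be found on the volume.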

Rule Three - CI/CA splits are not necessarily bad.

An example: you have a new application with a new high-level qualifier in a new catalog. This application has a mix of sequential data sets, GDGs, and VSAM data sets.

On day one of the application, these get defined into the new catalog, and most likely they will not be defined in data set name order (define jobs usually define all the sequential data sets, then all the GDGs, then all the VSAM data sets and any databases used, such as IMS or DB2).

After the initial defines, you will probably see some number of CI and CA splits. Don't panic. The splits have created room in the catalog where you will probably need it. After the application has reached a steady state (that is, some number of new data sets get defined because they were forgotten in the initial defines, and the GDGs have reached their limits), that is the time to look at the sizing and possibly a reorg.

There are other cases where the catalog constantly shows an increasing number of CI/CA splits, usually accompanied by growth in the number of extents. This often occurs because of the naming convention chosen for data set names. If all the records added to the catalog fall in one range of key values, you will see many CI/CA splits. If you include date and time as part of the data set name, try specifying time before date to help spread the inserts throughout the entire catalog. Use of the CA Reclaim feature, available in z/OS 1.12, can also reduce the number of CI and CA splits and the growth of extents.
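Enabling CA Reclaim (z/OS 1.12 and later) is done system-wide in IGDSMSxx and per data set through the data class or IDCAMS ALTER. A minimal sketch, with a hypothetical catalog name:

```
/* In IGDSMSxx (SYS1.PARMLIB): honor the data class CA Reclaim   */
/* attribute; CA_RECLAIM(NONE) disables the feature system-wide  */
CA_RECLAIM(DATACLAS)

/* For an individual KSDS, such as a catalog (hypothetical name),*/
/* CA Reclaim can be switched on with IDCAMS:                    */
  ALTER SYS1.USERCAT.APPL RECLAIMCA
```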

Rule Four - have catalog forward recovery software. Be familiar with how it works, test that it does work, and have recovery JCL ready for each catalog, so that if you do need to recover, you are not reading the manual and writing JCL at that point.

It should be a case of submit one or more jobs to recover a catalog.

IBM includes ICFRU, and there are several other products on the market. The products vary from bare bones to feature-rich (and the price varies accordingly). If your catalog forward recovery product requires SMF records, consider splitting the required records into their own data set or GDG when dumping SMF records from the SMF MANx data sets or log streams. Processing 20,000,000 records on 12 or more tapes to get the 1,000 records you need to forward recover makes your recovery time much longer.
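As a sketch of that splitting, an IFASMFDP step can write the catalog activity records (types 61, 65, and 66: define, delete, and alter) to their own output while dumping everything else. The data set names here are hypothetical:

```
//SMFSPLIT EXEC PGM=IFASMFDP
//SYSPRINT DD SYSOUT=*
//DUMPIN   DD DISP=SHR,DSN=SYS1.MAN1
//ALLREC   DD DISP=(NEW,CATLG),DSN=SMF.DAILY.ALL(+1),
//            UNIT=TAPE
//CATREC   DD DISP=(NEW,CATLG),DSN=SMF.DAILY.CATALOG(+1),
//            UNIT=SYSDA,SPACE=(CYL,(50,50))
//SYSIN    DD *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(ALLREC,TYPE(0:255))
  OUTDD(CATREC,TYPE(61,65,66))
/*
```

With the catalog records in their own GDG, a forward recovery only has to read the small data set rather than the full SMF dump.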

Now to your specific questions (if I haven't answered them previously).

A1). Use IDCAMS LISTCAT NAMES (or the CSI sample in SYS1.SAMPLIB), which will give an idea of how many entries are in a catalog. From this report, you can get an idea of how many data sets there are and where growth is occurring.
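A minimal sketch of both reports, with a hypothetical catalog name:

```
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Entry names only: count entries per high-level qualifier  */
  /* over time to see where growth is occurring                */
  LISTCAT CATALOG(SYS1.USERCAT.APPL) NAMES
  /* Statistics (extents, CI/CA splits) for the catalog itself */
  LISTCAT ENTRIES(SYS1.USERCAT.APPL) ALL
/*
```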

A2). Catalogs perform best when they are isolated on their own volume, disc controller, and dedicated channels. You can't do that for all catalogs, so in general we recommend that you try to avoid placing multiple catalogs on a volume (though you might be able to get away with grouping small, lightly used catalogs on a volume) and that you do not place catalogs on volumes with other high-activity data sets, such as the JES checkpoint, RACF database, SMS ACDS, HSM CDSs, and RMM CDSs.

A3). See Rule Three. Catalogs should (at some point) reach a "steady state", though in the case of some products, such as report archiving software, that might take years (that is, we need to keep copies of this report for two years, five years, seven years). If the CI/CA split count and extents consistently go up, look at the data set naming conventions used, or consider splitting out certain aliases. If you can't change naming conventions, moving the aliases that cause problems to their own catalog should cut down on the number of catalogs you need to keep a close eye on.

A4). See Rule Three. Reorganizing a catalog probably means you will see CI/CA splits immediately jump up, but the activity should settle down.

A5). There are no recommended freespace values; it depends on the use of the catalog. If the catalog contains only historical data sets and no adds are occurring, you can get away with a value of freespace(0,0). If you have one of those catalogs whose naming convention should change, but you can't change it, then specifying a large value might help somewhat.

A6). See Rule Two.

A7). There is some additional CPU usage with multilevel aliases, but they can provide significant benefits.

Say you have a naming convention as follows: A.PROD, A.TEST, A.QA. You can provide different levels of service, availability, and access by pointing each of the three aliases to a separate catalog. If the catalog that the alias A.PROD points to breaks, you know you need to fix it now, whereas the others might have different availability requirements.

If you have disc of varying performance, make sure the production catalog is on the better-performing disc. From a security perspective, you can restrict access to the various catalogs as required by people's job function. Also, you might have the case where a new catalog, and a new alias, is created each year for an application (something like A.Y04 and A.Y05 for the aliases). If you are using a scheme like this, you can separate the active data from the data that is rarely, if ever, touched.

Also, with multilevel aliases, ensure your catalog names have at least one more qualifier than the alias level. For example, if the alias level is set to 2, then all catalogs should have at least three qualifiers in their names (that is, SYS1.USERCAT.ABLE). If the alias level is set to 3, then all the catalogs should have four qualifiers (that is, SYS1.USERCAT.ABLE.PLEX), and if it is set to 4, five qualifiers. If a catalog name has no more qualifiers than the alias level, in some cases it is treated as an alias and its contents searched when a locate request is issued. This can cause extra I/O and CPU usage.
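As a sketch of the A.PROD/A.TEST/A.QA example above, with hypothetical catalog names that follow the qualifier rule for an alias level of 2:

```
//DEFALIAS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Each two-qualifier alias points to its own three-qualifier */
  /* catalog (names are hypothetical)                           */
  DEFINE ALIAS(NAME(A.PROD) RELATE(SYS1.USERCAT.PROD))
  DEFINE ALIAS(NAME(A.TEST) RELATE(SYS1.USERCAT.TEST))
  DEFINE ALIAS(NAME(A.QA)   RELATE(SYS1.USERCAT.QA))
/*
```

The alias search level itself can be changed dynamically with the operator command F CATALOG,ALIASLEVEL(2).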

A8). We'd recommend two Redbooks, "VSAM Demystified" and "ICF Catalog Backup and Recovery: A Practical Guide", available from the IBM Redbooks site (search for "catalog").

A10). From a performance perspective, anything you can do to avoid disc I/O, or slower I/O, will help. Putting catalogs into VLF should improve disc performance for the BCS. Use of ECS (Enhanced Catalog Sharing) will reduce the I/Os to the VVDS. Use of SMSVSAM for RLS access to catalogs reduces the number of enqueues and allows better buffering.

Use of GRS Star rather than GRS Ring does wonders for performance. Both ECS and GRS Star require a coupling facility. If you use MIM rather than GRS, have MIM use the coupling facility rather than CTCs or a control data set.

If you have single-system plexes (that is, monoplexes) that do not share disc and do not share catalogs (and NEVER WILL), then you can get improved performance by specifying GRS=NONE in IEASYSxx and defining or altering your catalogs to have shareoptions(3,3). If there is a possibility that the disc (or the catalogs) may be shared, do not, repeat, DO NOT specify GRS=NONE and shareoptions(3,3). If you have a single system, have set GRS=NONE, have changed the shareoptions to (3,3), and now another system needs to be added to the original, be aware that there will be a performance impact. It can be lessened by tuning and by use of GRS Star, VLF, and RLS, but there will be some additional CPU usage and enqueues caused by the need of the two systems to communicate and the extra actions taken by the software to ensure integrity.
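Two of those items can be sketched concretely. For VLF, the catalog address space uses class IGGCAS, with one EMAJ entry per catalog to be cached; ECS is enabled per catalog with the ECSHARING attribute and activated with an operator command. The catalog names below are hypothetical:

```
/* COFVLFxx (SYS1.PARMLIB): cache BCS records in VLF            */
CLASS NAME(IGGCAS)
      EMAJ(SYS1.USERCAT.PROD)
      EMAJ(SYS1.USERCAT.TEST)

/* IDCAMS: mark a catalog eligible for Enhanced Catalog Sharing */
  ALTER SYS1.USERCAT.PROD ECSHARING

/* Operator command: have ECS pick up eligible catalogs         */
  F CATALOG,ECSHR(AUTOADD)
```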


Document Information

Modified date:
03 September 2021