
ITM6: How to handle very big STH files

Technical Blog Post



From time to time, someone reports problems exporting historical data from ITM servers to the Tivoli Data Warehouse database (TDW db).
There are several possible causes for this behavior, and one of them is oversized binary historical data files, the so-called short-term history (STH) files.
When you set up historical data collection for attribute groups, you need to make the right choices for the Location of historical data files, the Collection Interval (how often historical data are written to that Location), and the Warehouse Interval (how often historical data are exported from that Location to the warehouse database).
If historical data are collected much faster than they are exported to the TDW db, the STH files grow.
If there is a network problem between the host holding the STH files and the TDW db, the historical data cannot be moved and the STH files grow.
If the Warehouse Proxy Agent (WPA) is not working properly, the export process fails and, again, the STH files grow.
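To get a feel for the numbers (these figures are made up for illustration, not taken from a real case): with a Collection Interval of 1 minute and a Warehouse Interval of 24 hours, an attribute group that returns 200 rows per collection accumulates 200 x 1440 = 288,000 rows between exports. At a few hundred bytes per row that is on the order of 100 MB per day for a single attribute group, so once exports stall the files can reach gigabytes within days.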
What to do when this happens?

First of all, you need to solve any network or WPA issue; otherwise you will run into the big-STH-files problem again and again.
Once that is done, you may still observe that the historical data are not being exported and the STH files continue to grow. What is happening then?

Let's look at an example.
In the Location for the STH files, you may find a listing like this:

04.07.2017  07:01     2.374.179.536 QMCH_LH
02.03.2016  11:00             1.681 QMCH_LH.hdr
04.07.2017  07:01    16.841.112.840 QMCONNAPP
10.04.2017  14:18               986 QMCONNAPP.hdr
04.07.2017  07:01     2.892.612.424 QMERRLOG
02.03.2016  11:07               592 QMERRLOG.hdr
04.07.2017  07:00    25.398.956.492 QMMQIMDET
02.03.2016  11:15               469 QMMQIMDET.hdr
04.07.2017  07:00     7.754.872.960 QMQ_LH
02.03.2016  11:15             1.488 QMQ_LH.hdr

These files are too large (over 1 GB each), the WPA cannot handle them reliably, and so their data are not sent to the proper TDW db tables.
The files were not so large initially, and everything worked; at some point, though, the WPA could no longer cope with their size and the whole warehousing process stopped.
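To spot the offenders quickly on a UNIX/Linux system, you can search for STH files larger than 1 GB. The path below assumes a default /opt/IBM/ITM installation; adjust it to your CANDLEHOME:

# find /opt/IBM/ITM -path '*logs/History*' -type f -size +1G -exec ls -lh {} \;

(-size +1G is a GNU find extension; on platforms without it, use -size +2097152, which is 1 GB expressed in 512-byte blocks.)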

In this scenario, here is a suggested procedure that should let the warehousing process work again:
1) Use the "itmcmd history" command to convert the historical binary files into flat files (which can also be delimited files) so that you can then import them into your database using DB commands; a sketch of such an import follows the list below. This way you will not have gaps in your historical data.
Refer to
https://www.ibm.com/support/knowledgecenter/SSTFXA_6.3.0/com.ibm.itm.doc_6.3/cmdref/candlehistory.htm
and
https://www.ibm.com/support/knowledgecenter/SSTFXA_6.3.0/com.ibm.itm.doc_6.3/adminuse/historyconvertflat_intro.htm

If you are on a Windows machine, you do not have the itmcmd interface, so you need to run the krarloff executable directly.

The krarloff syntax is (with $opts, $table, and $roll as placeholders):
krarloff $opts -m $table.hdr -r $table.old -o $roll $table

Here is an example invocation in a Windows environment:
C:\IBM\ITM\TMAITM6_x64>krarloff -h -d ";" -m "C:\IBM\ITM\TMAITM6_x64\logs\History\KMQ\QM041031\QMMQISTAT.hdr" -o "C:\IBM\ITM\TMAITM6_x64\logs\History\KMQ\QM041031\QMMQISTAT.out" -s "C:\IBM\ITM\TMAITM6_x64\logs\History\KMQ\QM041031\QMMQISTAT"

Or, if you run krarloff from the directory where the history binary file and the .hdr file reside (this example is from Linux, but the usage is the same on Windows):
# krarloff -h -d ";" -m KLZLOGIN.hdr -o KLZLOGIN.out -s KLZLOGIN
Current codepage is 1208
Source file is : KLZLOGIN
Definition file is : KLZLOGIN.hdr
Source file will be renamed to : KLZLOGIN.old
Destination file is : KLZLOGIN.out
Delimiter set as : ";"
Rename succeeded : KLZLOGIN being renamed to KLZLOGIN.old
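If you have many STH files to convert, a small shell loop saves typing. This is only a sketch, assuming you run it from the History directory and want the same options as in the example above:

# Convert every table that has a .hdr definition file.
# krarloff itself renames each source binary to <table>.old after converting it.
for hdr in *.hdr; do
    table=${hdr%.hdr}              # e.g. KLZLOGIN.hdr -> KLZLOGIN
    [ -f "$table" ] || continue    # skip headers with no matching binary file
    krarloff -h -d ";" -m "$hdr" -o "$table.out" -s "$table"
done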
 

2) Remove the huge STH files in question, e.g. the QM* and QM*.hdr files above (you should have already converted them into flat files).
3) Review the historical collection settings and make sure you are limiting collection to only the strictly needed attribute groups.
You can increase the Collection Interval and shorten the Warehouse Interval so that data are written less frequently on the server and exported more often to the TDW db.
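As for the import mentioned in step 1, the exact commands depend on your warehouse database. Here is a minimal sketch for a DB2-based TDW, assuming the default database name WAREHOUS and that the Linux_Login attribute group maps to the table ITMUSER."Linux_Login"; schema, table name, and column layout vary by installation, so compare the .hdr file with the target table definition first:

db2 connect to WAREHOUS user itmuser
db2 'IMPORT FROM KLZLOGIN.out OF DEL MODIFIED BY COLDEL; INSERT INTO ITMUSER."Linux_Login"'

If the columns in the .out file do not line up with the warehouse table, import into a staging table first and map the columns with SQL.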

After that, restart the involved agents and the WPA; you should see new STH files being created and exports occurring at the expected times.
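On UNIX/Linux the restart can be done with itmcmd from $CANDLEHOME/bin. For example, for the Linux OS agent (product code lz) and the WPA (product code hd); the product codes on your system may differ, and on Windows you would use Manage Tivoli Enterprise Monitoring Services instead:

./itmcmd agent stop lz
./itmcmd agent stop hd
./itmcmd agent start hd
./itmcmd agent start lz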

Hope this helped.
Walter
 

 

 
