Integrating IBM Cloud APM 8.x with Tivoli Data Warehouse
By Madhavan Vvk
In an IBM Cloud Application Performance Management (APM) environment, historical data is available for viewing for up to 24 hours and for comparison across up to 32 days in dashboards. There are also no APM-provided reports for APM agents. For long-term storage of data and for reporting, APM should be integrated with the Tivoli Data Warehouse (TDW). To implement this integration, APM V8 agents are configured to send data to the Tivoli Data Warehouse, and the data can then be retrieved through reports.
IBM Cloud Application Performance Management, Private 8.1.4
IBM Tivoli Monitoring v6.3 - WPA, SPA, TEPS
IBM Tivoli Common Reporting
APM has its own internal DB2 database, named WAREHOUS, which it uses to persist data for up to 32 days for display in the APM console. ITM v6 likewise has a WAREHOUS database (the TDW) to store long-term historical data from ITM agents. Here, we are going to send data from APM agents to the ITM v6 WAREHOUS database. We start by creating the ITM v6 WAREHOUS database as follows, assuming that there is no existing ITM v6 environment:
Log in as the instance owner:
db2 "CREATE DATABASE WAREHOUS ON <mountpoint> USING CODESET UTF-8 TERRITORY US RESTRICTIVE"
db2 get dbm cfg | grep SYSADM
SYSADM group name (SYSADM_GROUP) = DB2IADM1
-- CREATE a buffer pool with 8K page size
CREATE BUFFERPOOL ITMBUF8K IMMEDIATE SIZE 2501 PAGESIZE 8 K;
-- CREATE a regular table space using the 8K buffer pool
CREATE REGULAR TABLESPACE ITMREG8K PAGESIZE 8 K
MANAGED BY SYSTEM
USING ('itmreg8k') BUFFERPOOL ITMBUF8K;
-- CREATE a system temporary table space using the 8K buffer pool
CREATE SYSTEM TEMPORARY TABLESPACE ITMSYS8K PAGESIZE 8 K
MANAGED BY SYSTEM
USING ('itmsys8k') BUFFERPOOL ITMBUF8K;
-- CREATE a user temporary table space using the 8K buffer pool
CREATE USER TEMPORARY TABLESPACE ITMUSER8K PAGESIZE 8 K
MANAGED BY SYSTEM
USING ('itmuser8k') BUFFERPOOL ITMBUF8K;
db2 -stvf KHD_
db2 "GRANT CONNECT ON DATABASE TO USER itmuser"
db2 "GRANT CREATETAB ON DATABASE TO USER itmuser"
db2 "GRANT USE OF TABLESPACE ITMREG8K TO itmuser"
db2set -i <instance_name> DB2COMM=tcpip
db2 update dbm cfg using SVCENAME <port_number>
where instance_name is the name of the instance in which you created the warehouse database and port_number is the listening port for the instance. (The port number is specified in the file /etc/services.)
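Looking up the instance's listening port in /etc/services can be sketched as follows (the service name and port below are hypothetical examples, not values from this setup):

```python
# Find the TCP port registered for a DB2 instance service name in
# /etc/services-style data. "db2c_db2inst1" and 50000 are made-up examples.

def find_service_port(services_text, service_name):
    """Return the TCP port registered for service_name, or None."""
    for line in services_text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blanks
        if not line:
            continue
        fields = line.split()
        if fields[0] == service_name and fields[1].endswith("/tcp"):
            return int(fields[1].split("/")[0])
    return None

sample = """
# /etc/services excerpt
db2c_db2inst1   50000/tcp   # DB2 instance connection port
"""
print(find_service_port(sample, "db2c_db2inst1"))  # 50000
```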
The Warehouse Proxy Agent needs to run in autonomous mode; that is, it must be configured to run without registering its location with the hub monitoring server.
Note: You will see the following error in the WPA logs. It is logged because there is no TEMS connection and can be ignored.
KCII2004E Could not create the records for file "/op
You will have to create history configuration files for all the required agent types. You can find a sample history configuration file on your Cloud APM server. Create your configuration file by copying the sample file and editing the copy. The file includes the data sets (attribute groups) of the agent that can send historical data to the Tivoli Data Warehouse. If a particular data set that you are interested in does not appear in the sample file, it is likely because that data set does not exist in the Tivoli Monitoring V6.3 agent product or is not available for historical data collection. You can remove the data sets that you do not want to collect data for.
The file also contains other sample or default settings. You must modify these settings to configure historical data collection. Do not modify the sample history configuration file, because the next Cloud APM server upgrade installation can overwrite it. Instead, create a copy of the file.
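The copy-then-edit rule can be scripted; a minimal sketch using a temporary directory in place of the real install paths (the file names and sample content are illustrative):

```python
# Create a working copy of a sample history configuration file instead of
# editing the sample in place, since a Cloud APM server upgrade can
# overwrite the sample. Paths and content here are made-up examples.
import os
import shutil
import tempfile

tmp = tempfile.mkdtemp()
sample = os.path.join(tmp, "ux_history.xml.sample")
with open(sample, "w") as f:
    f.write('<HISTORY EXPORT="60" INTERVAL="15" RETAIN="6" TABLE="System"/>\n')

work_copy = os.path.join(tmp, "ux_history.xml")
shutil.copyfile(sample, work_copy)   # edit the copy, never the sample

print(os.path.exists(work_copy))  # True
```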
Example: Unix agent
The history configuration file is named pc_history.xml, where pc is the two-character product code. For example, ud_history.xml for the Db2® agent and lz_history.xml for the Linux operating system agent.
3. Open pc_history.xml in a text editor.
ip.pipe: For non-secure RPC communication between the agent and the Warehouse Proxy Agent, leave this at ip.pipe. For secure RPC communication, change it to ip.spipe.
#netaddress: Set the IP address or fully qualified host name of the system where the Warehouse Proxy agent is installed. If you use an IP address, add the # sign before the address. If you use a fully qualified host name, make sure the # sign is not present before the host name.
port#: Enter the listening port of the Warehouse Proxy agent. The default port is 63358 for the ip.pipe protocol and 65100 for the ip.spipe protocol. If you want to specify more than one destination or protocol, separate each with a semi-colon (;). For example, you can set the value:
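A destination value of this shape can be assembled with a short helper; a sketch assuming the protocol:address[port] form (with # prefixed only to IP addresses, per the rule above), using made-up addresses and ports:

```python
# Build a Warehouse Proxy destination string such as
#   ip.pipe:#9.11.22.33[63358];ip.spipe:wpa.example.com[65100]
# Rule from the text: "#" before an IP address, no "#" before a host name.
# The addresses and host name below are hypothetical.

def destination(protocol, address, port, is_ip):
    prefix = "#" if is_ip else ""
    return "{}:{}{}[{}]".format(protocol, prefix, address, port)

dests = [
    destination("ip.pipe", "9.11.22.33", 63358, is_ip=True),
    destination("ip.spipe", "wpa.example.com", 65100, is_ip=False),
]
print(";".join(dests))
# ip.pipe:#9.11.22.33[63358];ip.spipe:wpa.example.com[65100]
```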
NOTE: The Knowledge Center says that you can find the value of the warehouse location string in the RAS1 log file on the Warehouse Proxy Agent host. The RAS1 log file is located in the Install_Home/logs directory. The file name format is host
A RAS1 log message can look like this:
You will not find this information in the HD log until the WPA is registered to a TEMS.
<HISTORY EXPORT=60 INTERVAL=15 RETAIN=6 TABLE=TABLENAME/>
where TABLENAME is the data set name. For example, if you do not want to send Linux_IP_Address data samples to the Tivoli Data Warehouse, delete the <HISTORY EXPORT=60 INTERVAL=15 RETAIN=6 TABL
In the rows that remain, specify the interval for exporting the data, the interval for collecting the data, and how long to keep the collected samples locally:
This parameter specifies the interval, in minutes, for exporting historical data to the Tivoli Data Warehouse. Valid export intervals are 15, 30, and values divisible by 60; intervals greater than 60 can be 120, 180, 240, and so on, up to 1440. The export interval must also be divisible by the INTERVAL parameter value. If you enter an invalid value, no historical data is collected or exported for the specified data set. Default: 60 minutes.
This parameter specifies the historical data collection interval, in minutes. The minimum collection interval is 1 minute, and the maximum is 1440 (24 hours). Valid intervals must divide evenly into 60 (1, 2, 3, 4, 5, 6, 10, 12, 15, 20, and 30) or be divisible by 60 (120, 180, 240, and so on, up to 1440). If you enter an invalid value, no history is collected for the specified data set. Default: 15.
This parameter defines the short-term history data retention period in hours, with a one-hour minimum. There is no upper limit other than the storage space available on the system. After the retention limit is reached, the agent deletes the oldest data samples as new samples arrive. This retention period ensures that history data is not lost if the agent loses communication with the Tivoli Data Warehouse for some time. Default: 6 hours.
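These validity rules can be expressed as a small check; a sketch based only on the constraints described above:

```python
# Validate HISTORY row parameters per the rules above:
#   INTERVAL: must divide evenly into 60, or be a multiple of 60, up to 1440
#   EXPORT:   15, 30, or a multiple of 60 up to 1440, and divisible by INTERVAL
#   RETAIN:   at least 1 hour

def valid_interval(interval):
    return (1 <= interval <= 1440 and
            (60 % interval == 0 or interval % 60 == 0))

def valid_export(export, interval):
    allowed = export in (15, 30) or (export % 60 == 0 and 0 < export <= 1440)
    return allowed and export % interval == 0

def valid_history(export, interval, retain):
    return valid_interval(interval) and valid_export(export, interval) and retain >= 1

print(valid_history(60, 5, 3))    # True  (the settings used in the sample below)
print(valid_history(45, 15, 6))   # False (45 is not a valid export interval)
```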
ux_history.xml – (from a prod env)
<WAREHOUSE LOCATION="ip.pipe:# 9.11
<HISTORY EXPORT="60" INTERVAL="5" RETAIN="3" TABLE="System"/>
<HISTORY EXPORT="60" INTERVAL="5" RETAIN="3" TABLE="Disk"/>
<HISTORY EXPORT="60" INTERVAL="5" RETAIN="3" TABLE="Network"/>
<HISTORY EXPORT="60" INTERVAL="5" RETAIN="3" TABLE="SMP_CPU"/>
The APM server looks for the pc_history.xml files in the inst
Depending on the EXPORT value set in the XML file, you can see attribute tables created in the WAREHOUS database. For example, with a value of 60, data is exported to the WAREHOUS database after 60 minutes, and you can see the tables at the time of the first export. This confirms that warehousing is properly configured and working.
To configure a Summarization and Pruning agent to run in autonomous mode, complete the following steps:
1. Install a Tivoli Enterprise Portal Server and add application support for all types of monitoring agents that will be collecting historical data.
2. Install and configure the Summarization and Pruning agent using the standard procedures.
3. If the Summarization and Pruning agent is not installed on the same machine as the portal server, copy the required application support files to the Summarization and Pruning agent server.
These files are named dockpc, where pc is the two-letter product code for the monitoring agent.
On Windows, the files are in the install_dir\cnps directory.
On Unix, the files are in the inst
By default, the Summarization and Pruning agent looks for the application support files in the inst
4. On the machine where the Summarization and Pruning agent is installed, open its environment file in a text editor:
Linux and UNIX: inst
5. Edit the following variables: To enable the Summarization and Pruning agent to run without connecting to the Tivoli Enterprise Portal Server, set KSY_AUTONOMOUS=YES. If you did not install the application support files in the default directory (see step 3), set KSY_
6. Restart the Summarization and Pruning agent. The WAREHOUSESUMPRUNE table is created automatically when the Summarization and Pruning agent is started.
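The edit in step 5 can be scripted; a minimal sketch, assuming the environment file uses simple name=value lines (the file content below is a made-up example):

```python
# Set KSY_AUTONOMOUS=YES in a key=value environment file, appending the
# variable if it is absent. The sample file content is hypothetical.

def set_env_var(text, name, value):
    lines, found = [], False
    for line in text.splitlines():
        if line.split("=", 1)[0].strip() == name:
            lines.append("{}={}".format(name, value))   # replace existing
            found = True
        else:
            lines.append(line)
    if not found:
        lines.append("{}={}".format(name, value))       # append if missing
    return "\n".join(lines) + "\n"

env = "KSY_MAX_WORKER_THREADS=2\nKSY_AUTONOMOUS=NO\n"
print(set_env_var(env, "KSY_AUTONOMOUS", "YES"))
```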
You cannot configure the summarization and pruning settings through the TEPS, because none of the agents report to the hub TEMS and so they are not visible in the TEPS console.
Instead, configure the settings directly in the WAREHOUSESUMPRUNE table of the warehouse database by using SQL commands. The table below describes the columns of the WAREHOUSESUMPRUNE table. Insert one row for each attribute group for which you want to collect historical data, along with the values for any summarization and pruning settings. You do not need to set defaults for unused options; they are built into the table design. Varchar values must be enclosed in single quotes (' ').
Configuration, daily/hourly summarization, and daily/hourly pruning
Collection is configured, and daily and hourly summarizations are set. Pruning is specified for daily 3-month intervals and hourly 2-day intervals. Use the SQL INSERT command.
db2 “INSERT INTO WAREHOUSESUMPRUNE (TABNAME, DAYSUM, HOURSUM, PDAY, PDAYINT, PDAYUNIT, PHOUR, PHOURINT, PHOURUNIT, PRAW, PRAWINT, PRAWUNIT) VALUES ('WT
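An insert of this kind can also be assembled programmatically; a minimal sketch (the helper and the chosen column values are illustrative, not recommended settings) that applies the single-quoting rule for varchar values:

```python
# Assemble a WAREHOUSESUMPRUNE INSERT statement from column/value pairs,
# quoting string (varchar) values in single quotes as the table requires.
# TABNAME 'KLZCPU' appears in this article; PRAWINT=7 is just an example.

def build_insert(table, pairs):
    cols = ", ".join(name for name, _ in pairs)
    vals = ", ".join("'{}'".format(v) if isinstance(v, str) else str(v)
                     for _, v in pairs)
    return "INSERT INTO {} ({}) VALUES ({})".format(table, cols, vals)

stmt = build_insert("WAREHOUSESUMPRUNE",
                    [("TABNAME", "KLZCPU"), ("PRAWINT", 7)])
print(stmt)
# INSERT INTO WAREHOUSESUMPRUNE (TABNAME, PRAWINT) VALUES ('KLZCPU', 7)
```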
From a production environment:
Below is the output from the WAREHOUSESUMPRUNE table for Linux CPU:
db2 "select * from \"WA
TABNAME YEARSUM QUARTSUM MONSUM WEEKSUM DAYSUM HOURSUM PYEAR PYEARINT PYEARUNIT PQUART PQUARTINT PQUARTUNIT PMON PMONINT PMONUNIT PWEEK PWEEKINT PWEEKUNIT PDAY PDAYINT PDAYUNIT PHOUR PHOURINT PHOURUNIT PRAW PRAWINT PRAWUNIT MSLNAME
KLZCPU -16823 -16823 -16822 -16822 -16822 -16822 -16838 1 -16834 -16838 1 -16834 -16837 6 -16835 -16837 3 -16835 -16837 30 -16836 -16837 7 -16836 -16837 7 -16836 #ALL
1 record(s) selected.
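For readability, the flat header line and value row above can be paired up column by column; a small sketch using the exact output shown:

```python
# Pair the db2 output header with the KLZCPU row values to make the flat
# listing readable. Header and row are copied verbatim from the output above.

header = ("TABNAME YEARSUM QUARTSUM MONSUM WEEKSUM DAYSUM HOURSUM PYEAR "
          "PYEARINT PYEARUNIT PQUART PQUARTINT PQUARTUNIT PMON PMONINT "
          "PMONUNIT PWEEK PWEEKINT PWEEKUNIT PDAY PDAYINT PDAYUNIT PHOUR "
          "PHOURINT PHOURUNIT PRAW PRAWINT PRAWUNIT MSLNAME").split()
row = ("KLZCPU -16823 -16823 -16822 -16822 -16822 -16822 -16838 1 -16834 "
       "-16838 1 -16834 -16837 6 -16835 -16837 3 -16835 -16837 30 -16836 "
       "-16837 7 -16836 -16837 7 -16836 #ALL").split()

settings = dict(zip(header, row))
print(settings["TABNAME"], settings["PDAYINT"], settings["MSLNAME"])
# KLZCPU 30 #ALL
```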
Note: The table name used in pc_history.xml and the TABNAME used in the WAREHOUSESUMPRUNE table are different.
To enable the Linux CPU attribute group for historical data collection, the table name would be KLZ_CPU:
<HISTORY EXPORT="60" INTERVAL="5" RETAIN="3" TABLE="KLZ_CPU"/>
whereas if you want to set the summarization and pruning settings for Linux CPU in WAREHOUSESUMPRUNE, the corresponding TABNAME would be KLZCPU:
db2 “INSERT INTO WAREHOUSESUMPRUNE (TABNAME, DAYSUM, HOURSUM, PDAY, PDAYINT, PDAYUNIT, PHOUR, PHOURINT, PHOURUNIT, PRAW, PRAWINT, PRAWUNIT) VALUES ('KL
You can find all the attribute table names in the summarization logs.
Tivoli Common Reporting uses the Tivoli Data Warehouse as the source of historical data for generating reports. Refer to the ITM Administrator's Guide for the detailed procedure.
To get up and running with reporting, complete the following tasks in the order provided:
Assumption: Tivoli Common Reporting is already installed in the environment.
You must first configure historical data collection. Reports run against long-term historical data that is stored in the Tivoli Data Warehouse. Before you can run reports, ensure that you have installed the required components and configured historical data collection.
To build the resource dimension table, configure historical data collection for one or more of the following attribute groups, depending on the operating system that you are reporting on:
Preparing the Tivoli Data Warehouse for Tivoli Common Reporting includes creating the dimension tables, which are required for running the Cognos reports.
You must configure an ODBC connection between the Tivoli Common Reporting server and the TDW server. The purpose of this configuration is to enable Tivoli Common Reporting to retrieve the required data from the TDW database by using a local DB2 client.
A data source named “TDW” is created in the TCR console, which creates a connection to the TDW server.
TCR reports for the respective agent types are imported through the TCR console.
Finally, you can generate reports for specific agent types for which historical data was configured.