Welcome to the Application Performance Management Blog, where you can read perspectives from APM experts. This blog provides insights into the Application Performance Management solution, as well as technical details about specific IBM products.
This video shows you how to migrate your Tivoli Data Warehouse to range partitioning.
You will learn from this demo:
- How to migrate your existing Tivoli Data Warehouse on DB2 to range partitioning, so that you can take advantage of significant performance improvements in pruning and querying;
- How to verify that the migration is successful.
Direct link to YouTube: https://www.youtube.com/watch?v=siWtESryXPM
Check out all our other posts and updates:
Academy Blogs: ... [More]
As ITM 6.3 fix pack 3 has just been released, I know many of you will be planning upgrades in the near future. Here are a few tips and features that can really improve the speed, capacity usage and capabilities of warehousing in ITM. There is even a neat little feature to send data directly to analytics tools outside of the warehouse entirely, thus offsetting load on your database (win, win).
This blog takes a slightly different tack from the how-to/deep-dive information I usually post, but I feel this... [More]
ITM 6.3.0 introduces significant improvements in pruning and querying large Tivoli Data Warehouse databases by allowing database tables to be partitioned. Range partitioning permits the fast rollout of data without having to perform a resource-intensive DELETE operation on blocks of rows. Range partitioning also improves query performance when the partitioning key is part of the query clause. Performance improves because the database can discard unnecessary partitions from the list of blocks that must be fetched from... [More]
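As a rough mental model of the partition elimination described above, here is a minimal Python sketch. It is purely illustrative: the dict-of-dates "table", the attribute names, and the sample values are invented, not DB2 internals.

```python
from datetime import date

# Toy model of a range-partitioned history table: one partition per day,
# keyed by the partitioning column (the collection date).
partitions = {
    date(2013, 9, d): [{"day": date(2013, 9, d), "cpu_pct": 10 * d}]
    for d in range(1, 8)  # seven daily partitions
}

def query(day_filter):
    """Scan only partitions whose key can satisfy the filter (partition
    elimination); every other partition is skipped without being read."""
    scanned = 0
    rows = []
    for key, part in partitions.items():
        if not day_filter(key):
            continue  # partition discarded from the fetch list
        scanned += 1
        rows.extend(part)
    return rows, scanned

# A query restricted by the partitioning key touches 1 of the 7 partitions.
rows, scanned = query(lambda d: d == date(2013, 9, 3))
```

The point of the sketch is the `continue`: when the predicate is on the partitioning key, whole partitions drop out of the scan before any of their rows are fetched.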
Table Range Partitioning
Range partitioning is a database feature that allows for quick rollout of data from a table. Instead of deleting rows in batches and committing frequently to avoid filling the transaction log, an entire day's partition can be quickly detached and removed. Detaching a partition takes seconds, rather than the hours a bulk delete can take when the table is not range partitioned.
The Warehouse Proxy Agent (WPA) and Summarization and Pruning Agent (SPA) both support this new... [More]
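To illustrate why detaching beats deleting, here is a hedged Python sketch contrasting the two pruning paths. The table layout and function names are hypothetical; in DB2 the detach step corresponds to an ALTER TABLE ... DETACH PARTITION statement.

```python
from datetime import date

# Hypothetical model: a range-partitioned table as a dict of day -> rows,
# and an unpartitioned table as one flat list of rows.
partitioned = {date(2013, 9, d): [("row", d, i) for i in range(1000)]
               for d in range(1, 11)}
flat = [("row", d, i) for d in range(1, 11) for i in range(1000)]

def roll_out_partitioned(table, cutoff):
    """Detach-and-drop: remove whole day partitions older than the cutoff.
    One operation per day -- no per-row work, no transaction-log pressure."""
    for day in [d for d in table if d < cutoff]:
        table.pop(day)  # analogous to detaching the partition

def roll_out_flat(table, cutoff):
    """Row-by-row delete: every row must be examined and the survivors
    rewritten -- the expensive path the post describes."""
    table[:] = [r for r in table if date(2013, 9, r[1]) >= cutoff]

roll_out_partitioned(partitioned, date(2013, 9, 4))
roll_out_flat(flat, date(2013, 9, 4))
```

Both calls leave the same data behind; the partitioned path did 3 dictionary pops, while the flat path touched all 10,000 rows.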
The Warehouse Proxy Agent (WPA) server has two data connections to establish at startup. The first is to the database being used to warehouse data. The second, which we will discuss in this blog post, is the port on which the WPA listens for connections from the agents that are warehousing data. When these agents start, they must make a Remote Procedure Call (RPC) connection to the WPA listening port to establish a data path for historical data. The way this happens is as follows:
Taking a technical break this week on Warehouse best practices. The last six blog entries had to do with the Warehouse Proxy client that runs as part of the agent, including the TEMA code that saves the historical data in the short term history (STH) files. The next series of blog entries will be about the Warehouse Proxy server, or Warehouse Proxy Agent (WPA). The WPA is responsible for listening on a port for historical data from the Warehouse Proxy clients (agents). The WPA also has to connect to the database being used for warehousing... [More]
The Warehouse Proxy agent provides a set of workspaces that are defined as “Manage Our Stuff With Our Stuff” (MOSWOS). IBM/Tivoli loves acronyms, and MOSWOS has been around internally for a long time. But it is a term that usually causes confusion for a user of ITM. This confusion extends further for the Warehouse Proxy agent, in the sense that most users do not know that these MOSWOS workspaces exist. The Warehouse MOSWOS workspaces can be used to look for problems occurring in historical data collection. ... [More]
In my previous blog entry, I detailed how to configure historical collection so that the least amount of data is saved and the export of that data is the quickest, all to keep the size of the short term history file below 1GB. However, when saving historical data at the TEMA, the agent can become isolated from the Warehouse Proxy agent because of network issues, or the database itself may be down, preventing exports. When this happens, the STH files start growing and can exceed the 1GB limit over time. At the TEMS, this can also... [More]
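As a quick way to spot STH files creeping toward that limit, here is a small Python sketch that flags oversized files in a history directory. The directory layout, file names, and the tiny 50-byte demo limit are all invented for illustration; in real use you would point it at the agent's history path with a 1GB limit.

```python
import tempfile
from pathlib import Path

def oversized_sth_files(history_dir, limit_bytes):
    """Return the names of files in history_dir larger than limit_bytes,
    biggest first."""
    paths = [p for p in Path(history_dir).iterdir()
             if p.is_file() and p.stat().st_size > limit_bytes]
    return [p.name for p in
            sorted(paths, key=lambda p: p.stat().st_size, reverse=True)]

# Demo with throwaway files and a 50-byte limit so the example runs
# instantly; a real check would use limit_bytes = 1 << 30 (1GB).
with tempfile.TemporaryDirectory() as d:
    Path(d, "small.hst").write_bytes(b"x" * 10)
    Path(d, "big.hst").write_bytes(b"x" * 100)
    flagged = oversized_sth_files(d, limit_bytes=50)
```

Running something like this periodically gives early warning before an isolated agent's STH files grow past the 1GB mark.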
In my previous blog entry we discovered that each short term history (STH) file must be kept under 1GB in size for performance reasons at the TEMA (Tivoli Enterprise Monitoring Agent) or TEMS (Tivoli Enterprise Monitoring Server). As with any smartphone data plan, it is always good to have ways to avoid exceeding the limit, and the same applies to STH files. Here are ways to achieve this: use the largest collection interval possible, which reduces the number of rows saved in the STH file; filter any unneeded data from the... [More]
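The effect of the collection interval on STH file size can be sketched with simple arithmetic. The row size, instance count, and retention figures below are illustrative assumptions, not measured ITM values:

```python
def sth_file_bytes(row_bytes, instances, interval_seconds, retention_hours):
    """Rough size of one short term history file: rows written per hour,
    times the hours retained, times the bytes per row."""
    rows_per_hour = (3600 // interval_seconds) * instances
    return rows_per_hour * retention_hours * row_bytes

# Assumed example: 500-byte rows, 50 instances of the attribute group,
# a 5-minute collection interval, 24 hours retained at the agent.
size = sth_file_bytes(500, 50, 300, 24)    # 7,200,000 bytes (~7 MB)
halved = sth_file_bytes(500, 50, 600, 24)  # 10-minute interval
```

Doubling the collection interval from 5 to 10 minutes halves the file, which is why the largest acceptable interval is the first lever to pull.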
You have just bought your first smartphone. You then have to decide how to use less data on your data plan (unless you are on an unlimited plan). You google 'how to use less data' and find thousands of posts on how to do this. In Tivoli ITM warehousing, the same concept applies: how to keep the short term history (STH) files at a reasonable size. For large, data-intensive agents, this requires thinking about the configuration of the historical collection interval, warehousing upload interval, and retention... [More]
Robert Frost said in “The Road Not Taken”: “Two roads diverged in a wood, and I, I took the one less traveled by, And that has made all the difference.” For historical collection of data within the Tivoli Framework, the user has to decide, for each table collected, whether it should be saved at the TEMA (Tivoli Enterprise Monitoring Agent) or the TEMS (Tivoli Enterprise Monitoring Server). The default location is the TEMA, but we have found that users like to move this to the TEMS so that all of their data is located on one machine.... [More]
Big Data, as described on Wikipedia, may be the future, but right now the task for Tivoli Data Warehousing (TDW) is to understand the amount of data being collected and warehoused by the Tivoli agents in your ITM infrastructure. This analysis can be simplified using the Tivoli Warehouse Projection Spreadsheet, located here: Warehouse Projection Spreadsheet, which has been uploaded to this blog entry. The ITM 6.2.3 Installation and Setup Guide contains information on estimating the disk space requirements for the Tivoli Data Warehouse. To follow... [More]
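In the same spirit as the projection spreadsheet, a back-of-the-envelope estimate of the detailed-data portion of the warehouse can be sketched in Python. All figures below are illustrative assumptions for one attribute group, not ITM defaults:

```python
def warehouse_detail_bytes(agents, rows_per_agent_per_day,
                           row_bytes, retention_days):
    """Detailed-data disk usage: rows inserted per day across all agents,
    times bytes per row, times days of detailed data kept."""
    return agents * rows_per_agent_per_day * row_bytes * retention_days

# Assumed example: 200 agents, 288 rows/day each (one attribute group
# sampled every 5 minutes), 500-byte rows, 30 days of detail retained.
estimate = warehouse_detail_bytes(200, 288, 500, 30)  # 864,000,000 bytes
```

A real projection also has to add the hourly, daily, weekly, and other summarization tables, which is exactly the bookkeeping the spreadsheet automates.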
The Tivoli Data Warehouse uses a very simple, flat database schema. This simplicity means that certain queries may not perform well, depending on how they are written (complexity, joins, ORDER BY, columns used in the WHERE clause, and so on). This article describes some best practices for setting up your Tivoli Data Warehouse. Data Collection: only collect the data that you need. Collecting unneeded data takes resources in the database and requires extra maintenance in the form of pruning. It will also increase... [More]
When you plan a Tivoli Monitoring Data Warehouse solution, the first step is to understand what type of data analysis should be done. Before enabling historical collection, think about your business requirements for the data. There are four common use cases for the historical data. Your needs will vary for each attribute group, so consider the use cases for each attribute group when configuring historical collection: Problem Determination and Debugging, Reporting, Capacity Planning/Predictive Alerting, and Adaptive Monitoring. Each of these... [More]