Software system workloads can always be described as a mix of two flavors of computing tasks:
OLTP is the computing activity of serving requests originated by interactive users who are waiting for an answer within an acceptable time, whereas batch processing refers to programs running in the background that try to finish a given amount of work within a designated window of time. The key performance metric for an OLTP system is the response time experienced by the end user, while for batch systems the reference metric is throughput, the number of tasks completed per unit of time. See H. H. Liu, Software Performance and Scalability: A Quantitative Approach, J. Wiley & Sons, for a more detailed discussion.
One of the benefits the Java platform introduced is that it manages memory for you through garbage collection (GC). Unfortunately, a poorly tuned GC can result in sub-optimal performance and scalability. The first step in GC tuning is choosing the GC policy according to the characteristics of the workload. Today's JVMs support multiple GC algorithms that you can control through command-line options. The IBM JVM supports four different policies through the -Xgcpolicy: option:
optthruput - This is the default policy and is typically used for applications where raw throughput is more important than short GC pauses. The application is stopped each time garbage is collected.
optavgpause - Trades high throughput for shorter GC pauses by performing some of the garbage collection concurrently. The application is paused for shorter periods.
gencon - Handles short-lived objects differently than objects that are long-lived. Applications that have many short-lived objects can see shorter pause times with this policy while still producing good throughput.
subpool - Uses an algorithm similar to the default policy's but employs an allocation strategy that is more suitable for multiprocessor machines. This policy is recommended for SMP machines with 16 or more processors and is available only on IBM pSeries® and zSeries® platforms.
Ideally, optavgpause should be the best choice for pure OLTP systems, while optthruput is better suited to batch processing (see the Java technology, IBM style: Garbage collection policies, Part 1 and Part 2 articles on developerWorks for a deeper analysis of garbage collection policies).
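For example, to try out a specific policy you pass the option on the Java command line. Here is what selecting the generational policy would look like (the heap settings and class name are placeholders for illustration, not recommendations):

java -Xgcpolicy:gencon -Xms512m -Xmx2048m com.example.MyApp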
TPAe-based systems are actually a mix of the two workload profiles, and their behavior is characterized by the presence of many transient objects. Based on these considerations, the Tivoli performance team decided to test the gencon policy, which can provide a good trade-off between short pause times and good overall throughput. The results achieved in all the experiments we ran were very encouraging. As you can see in the following graph from one of the scenarios, the response time improved and became more stable, as shown by a reduced standard deviation:
The gencon policy was also beneficial for CPU utilization. From this point of view, too, we observed a lower and more stable demand, as you can clearly see in the following graphs (default policy and gencon):
IBM invites you to join the new Customer Collaboration Program!
The Tivoli Workload Automation (TWA) 8.6 Customer Collaboration Program (CCP), which focused on version 8.6 of TWSz and TWSd, has successfully closed.
This program involved 67 companies in the development of the release, and more than 64 enhancements were integrated into TWA 8.6.
TWA development held 30 webcast sessions for demos and design reviews, as well as 45 usability sessions.
IBM is glad to announce that the CCP will continue, building with TWA experts and users to ensure the successful evolution of the TWA family, and invites you to join the program.
IBM is currently working on the themes that have been identified as most important for the scheduling market and for its customers, and it invites you to join the process. By joining the CCP, you can start a discussion with the lab on the themes and associated requirements that matter most to your organization.
The CCP is a complete but flexible interaction program, with design presentations, early development demos, usability tests, and a beta program. The CCP's interactive program is supported by the Collaboration Zone web site, where you can get an up-to-date view of the next release content (architecture design, monthly demos, and so on) and give early feedback on feature implementation and business scenarios through forums and a regular series of web meetings.
Join the new TWA CCP program today! Please contact firstname.lastname@example.org.
During last week's reporting web conference, one question that was asked was how you can configure your favorite reports to display directly on the Start Center. These might be reports you frequently access, or reports you want your users to focus on because they contain critical information.
One way you can do this is by using the Report List Portlet. Within this portlet, you can select any number and type of reports to display on the Start Center, including detail reports, ad hoc reports, and reports enabled through report integrations. Additionally, you can create multiple report list portlets on a Start Center page, each containing a grouping of related reports by application, location, or site.
To enable the report list portlet, you must first grant access to the Report List Setup application in the Security Group application.
Another option is to set up the KPIs on your Start Center to link to related reports. If a KPI is displayed in yellow or red status, your user can then click the related report to find out what is causing the issue and take immediate action.
Enabling your users to directly access reports on the Start Center via the Report List Portlet or KPIs is a great way to save them multiple mouse clicks and time.
For additional information on setting up the Report List Portlet or on configuring KPIs for report drilldown, please reference this page
On Tuesday, September 20th, I had an amazing opportunity to host a web conference for our Maximo and Version 7 clients. The session shared the latest information on Business Intelligence and Reporting but, most importantly, was opened up for over 60 minutes to give you, our clients, the opportunity to ask questions.
With representatives from over 15 companies, you asked tremendous, detailed questions on reporting strategy and functionality. Your questions led others to ask questions, so you quickly learned information and best practices from each other.
If you missed the session, the questions clients asked are detailed here. You can also locate this page by scrolling down to the FAQ section of the Report Wiki Page located here
Please comment back if you would like similar web conferences in the future, or if you have any other reporting questions. Thanks!
QBE (Query by Example), also known as Data Download, is an excellent way for you to quickly take the results of your application query and/or filters and export them to Microsoft Excel. You can then view each of the list tab fields, perform additional analysis, combine the data with other queries, print, or save.
However, your users often want to see additional fields in the exported data, and the fields that one user wants to see are not the same as the fields another user wants to see. You can modify the list of fields that is displayed by using the Application Designer; however, this involves resources, planning, and space limitations.
Through QBR (Query Based Reporting), you can instead provide your users the ability to:
- Use their application filter and/or query
- Select their unique fields or database attributes
- Define sorting, grouping, filters, and parameters
- Download to Microsoft Excel
- Avoid Application Designer modifications
QBR is available from essentially any application and provides powerful, flexible ad hoc reporting capability, as shown in the sample below.
With the QBR report creation process, you can quickly export your report results directly to Microsoft Excel with a few simple clicks. Additionally, you can choose whether to execute your report a single time and discard it, or to save your QBR report and execute it regularly in the future. You can also schedule the QBR so the results appear in your email inbox, along with other reports, on a preconfigured schedule.
You can find more details on the QBR functionality, including its use and how to extend it for your unique environment, at this url. Additional detailed information, including screenshots, can be found in the reference materials at this url
In the Version 7 product lines, there are numerous ways you can analyze your data using the Business Intelligence tools. Each analysis mechanism has its own set of unique attributes and can be used in a variety of situations. As an IBMer immersed daily in a sea of acronyms, I've listed these data options for you here with their identifying acronyms, in order of increasing data analysis capability.
QBE – Query By Example. Often referred to as Data Download, this functionality uses your application's filter and/or query; you can immediately download your results for additional analysis in Microsoft Excel.
RS – Result Sets. Using an application's query, enable a set of fields or a graphic for display on the Start Center.
AE – Application Exporting. Extends the QBE functionality by enabling fields from multiple database objects to be exported to various file formats through the use of Object Structures.
KPI – Key Performance Indicators. Visual indicators displaying status against defined targets.
QBR – Query Based Reporting. This is V7's terminology for ad hoc reporting, where users create their own reports on the fly from within the various applications.
ROS – Report Object Structures. Collections of joined database tables, forming the backbone of QBR and metadata packages.
OR – Operational Report. Often referred to as transactional reporting, these are the day-to-day detail reports users require to complete their work.
SR – Strategic Report. Enables viewing of data from varying perspectives through the use of complex graphs, in-depth calculations, or scenarios.
Which option or options are best for your unique business needs? And once you've selected an option, how do you maximize its use and enable its interaction with the other tools? Each of these options is discussed in detail in the Report Upgrade planning guides available at this site
Please stay tuned to these BiLOG (Business Intelligence Blog) entries as we focus on these questions and more.
Welcome to Useful Tiny Little Things, a series of topics that I will publish in the Process Automation blog. My name is Leandro Cassa, and I work at IBM as the CCMDB Level 3 support team leader. The purpose of this series is to share useful, simple things that we sometimes have no idea exist. These tips apply to CCMDB, Tivoli Asset Management for IT, Service Request Manager, and other products based on Tivoli's process automation engine.
This week we'll dig deeper into the Launch In Context feature. Launch In Context, as the name states, enables you to launch an external system (URL) in a given context; in this case, the context is our Tivoli process automation engine application context. For example, if you are using the Organizations application, you could provide an external system with data from your current context, such as field values.
It is a very useful feature. Change and Configuration Management Database (a Tivoli process automation engine-based product) uses it out of the box to provide visual integration with Tivoli Application Dependency Discovery Manager (TADDM).
If you have CCMDB installed and Launch In Context was set up during or after installation, you can check it out: click Go To > IT Infrastructure > Actual CI, select a record, and click Select Action > View Actual CI Topology > Business Application.
Then you get the following screen:
I wrote a wiki paper that describes how to use Launch In Context, including an example implementation using Google Maps. If you like the idea, see my paper posted right here
The result is very nice. If you are in the People application and the person record you selected has an address, and its site has billing and shipping addresses, you get something similar to the following:
In this example, I pin the addresses on a static map. Of course, the purpose of the paper is to introduce the concepts and walk you through a simple example, enabling you to do much more.
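To give you a flavor of the mechanism, a launch entry console URL for a maps integration could look something like the following; the substitution placeholders are illustrative assumptions on my part, not values copied from the paper:

http://maps.google.com/maps?q={ADDRESSLINE1}+{CITY}

At run time, the attribute values of the current record replace the placeholders, so the external system opens directly in the context of the selected record.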
That's it for this week. Hope this helps.
TSAM Extension for Workload Automation (http://www.ibm.com/software/ismlibrary?NavCode=1TW10WS12) provides a solution to integrate Tivoli® Service Automation Manager with Tivoli Workload Automation.
- Tivoli Service Automation Manager assists you in the automated provisioning, management, and deprovisioning of IT landscapes comprising hardware servers, networks, operating systems, middleware, and application-level software. You can deploy, monitor, and manage cloud computing services. It also provides traceable approvals and processes.
- Tivoli Workload Automation plans, monitors, and controls the flow of work through your enterprise's entire operation on both local and remote systems. From a single point of control, Tivoli Workload Automation analyzes the status of the production work and drives the processing of the workload according to installation business policies. It supports a multiple-end-user environment, enabling distributed processing and control across sites and departments within your enterprise.
A business scenario
The purpose of the following scenario is to show how TSAM Extension for Workload Automation can improve the business of a financial enterprise.
The Alpha Credit financial company has recently acquired another company, the Omega Bank. The company's business is spread across multiple activities in a complex environment comprising a high number of workstations located at different sites or in different organizational units. The new company needs to calculate bank interest rates on accounts every month. A dedicated system to run this critical task is required only a few days a month, so they need to optimize system utilization to reduce power consumption and maintenance costs; a smarter and greener planet campaign is in place in the company. Since before the acquisition, Omega has used the TSAM Extension for Workload Automation to provision ready-to-use scheduling environments with just a few clicks. The operator in charge of creating the environment creates a project using the Tivoli Service Automation Manager user interface and submits a service request to create a pool named INTERESTRATES_US. Based on the requirements in the service request, Tivoli Service Automation Manager creates a pool of dynamic agents in the specified Tivoli Workload Scheduler environment and installs the software necessary to process the interest rates. The resulting environment is ready to use, and no further intervention is required. To perform this task, the operator does not need to know the following background information:
- Hardware infrastructure and architecture
- Installation prerequisites and procedures for Tivoli Workload Scheduler dynamic agents
- Configuration details about the Tivoli Workload Scheduler master domain manager; this information is provided by the Tivoli Workload Scheduler administrator using a properties file
After the acquisition, the number of accounts to be processed suddenly triples, and all the jobs scheduled on the INTERESTRATES_US pool fail because the number of assigned resources is no longer sufficient. When the jobs fail, the operator receives an e-mail notification explaining the reason for the failure. The operator can easily resolve the problem by allocating more resources to the INTERESTRATES_US project using the Tivoli Service Automation Manager user interface. As soon as the resources are allocated, the operator restarts the jobs in error.
TSAM Extension for Workload Automation provides on-demand provisioning and deprovisioning of workload-automation-ready environments. The Extension creates pools and deploys dynamic agents when the need for new Tivoli Workload Scheduler workstations arises. The benefits that can be gained from using TSAM Extension for Workload Automation are the following:
- No prerequisite knowledge of Tivoli Workload Automation is required. All necessary information is available in the validated template.
- Checks on prerequisites are performed only once when defining the template to be used in future installations. The template is reused in all subsequent installations.
- Agents are easily deployed and installed by the operator, without intervention by the system administrator.
- Workstations are deployed on demand, on a just-in-time basis when new resources are required and can be deleted when they are no longer necessary.
- All new workstations are automatically installed, configured, and added to a pool, so that they are ready for scheduling, without having to wait for the next production day.
- The pool owner, defined when creating the pool, is automatically granted administrator privileges on the pool.
Each SQL query presented to the DBMS must be processed before data rows can be retrieved, updated, or inserted.
Each SQL statement is parsed and checked for syntactic correctness. In addition to parsing the query, the database optimizer decides on the best access plan for processing the SQL statement, using the statistics collected by the DBMS. Both parsing and optimizing introduce a certain amount of overhead to the overall processing time of the query. If a query needs to be executed many times, this overhead can become very significant.
DB2 9.7 introduces a feature called the statement concentrator, from which users of IBM Tivoli service management products can benefit. The statement concentrator modifies the dynamic SQL statements that DB2 receives, converting the literals in each statement into parameter markers. As a result, SQL statements that differ only in their literals are presented to the optimizer as the same SQL statement.
After being processed by the statement concentrator, an SQL statement from an IBM Tivoli service management product looks like the following:
Statement text = select * from asset where (assetnum = :L0 and siteid = :L1 ) fetch first 1000 rows only optimize for 1000 rows for read only
In this query, the literals originally present were replaced by more generic representations such as ":L0" and ":L1". In the snapshot, you can also see that the statement was executed 17592 times but was compiled only once:
Number of executions = 17592
Number of compilations = 1
Worst preparation time (ms) = 3
Best preparation time (ms) = 3
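To make the mechanics concrete, here is a minimal JDBC sketch of the two statement styles that the concentrator unifies; the connection URL, credentials, table, and values are assumptions for illustration only:

import java.sql.*;

public class StmtConcDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; replace with your own.
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MAXDB", "user", "password")) {
            // Literal SQL: without the statement concentrator, every distinct
            // pair of literals forces a fresh compilation and a new access plan.
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "select * from asset where assetnum = '1001' and siteid = 'BEDFORD'")) {
                while (rs.next()) { /* process row */ }
            }
            // Parameter markers: one compiled plan is reused for every value,
            // which is effectively what STMT_CONC LITERALS achieves for
            // applications that send literal SQL.
            try (PreparedStatement ps = con.prepareStatement(
                     "select * from asset where assetnum = ? and siteid = ?")) {
                ps.setString(1, "1001");
                ps.setString(2, "BEDFORD");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) { /* process row */ }
                }
            }
        }
    }
}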
You can use the following DB2 commands to enable and disable the statement concentrator:
- Enable: UPDATE DB CFG USING STMT_CONC LITERALS IMMEDIATE;
- Disable: UPDATE DB CFG USING STMT_CONC OFF IMMEDIATE;
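If you want to verify the effect, counters like the ones shown earlier come from the dynamic SQL snapshot; for example, assuming a database named MAXDB (a placeholder name):

db2 get snapshot for dynamic sql on MAXDB

Compare the "Number of executions" and "Number of compilations" values for your hottest statements before and after enabling the concentrator.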
The advantages of the statement concentrator are the following. First, it reduces the overall processing time of the query: the optimizer stores the access plan of the first query in the SQL statement cache (or package cache), and for subsequent queries the access plan is retrieved from the cache, so the optimizer does not have to recalculate it. Second, the statement concentrator can help reduce CPU overhead, since SQL statement optimization is a CPU-intensive task, especially for complex queries. Our lab results show that IBM Tivoli service management products can benefit greatly from the DB2 statement concentrator.
One thing to keep in mind is that the statement concentrator might change the access plans of existing queries. We discovered that if some queries are not fully optimized, their performance might be worse with the statement concentrator, so users would need to re-optimize these statements, possibly with additional indexes.
Hi Community,
Please have a look at the following presentations: Tivoli Workload Automation 8.6 Presentations
These presentations will enable you to understand the value of this new release and how it can help your business.
Tivoli Workload Automation Development
Do you have questions on the future reporting direction of Maximo? Have you heard about V7RI for Maximo 6 but don't know what it is? Are you having problems locating the listing of delivered reports in the Version 7 releases? Is it true you can edit a QBR report in the 7.5 releases?
To get these and other Maximo reporting questions answered, please join Pam Denny on Tuesday, September 20th at 10am EST for a 90-minute live, interactive session on Maximo reporting.
For more information, see Pam's post on the Asset Management blog.
TWA 8.6 Feature:
"Support for cross dependencies among jobs running on different scheduling engines"
Follow the link to find a detailed whitepaper on this feature!
This feature enables scheduling teams to integrate workloads running on different engines, which can be a mix of Tivoli Workload Scheduler for z/OS engines (controller) and Tivoli Workload Scheduler engines (master domain manager and backup master domain manager).
The scheduling business comprises multiple activities: some run at different sites or involve different organizational units, while others require different skills. For these reasons, users must keep their scheduling environments separate.
Nevertheless, even if most of the batch workload is managed locally, none of these environments is completely isolated from the others, because they frequently need to interoperate to exchange data or synchronize activities.
What scheduling users need is the capability to federate their different heterogeneous scheduling environments in an easy way, so that they can:
- Define, in one scheduling environment, dependencies on batch activities that are managed by another scheduling environment
- Control the status of these dependencies by navigating across the different scheduling environments from a single user interface
A cross dependency is, from a logical point of view, a dependency of a local job on a job instance that is scheduled to run in a remote engine's plan. To implement a cross dependency, you need to define the following objects:
Remote engine workstation
A new type of workstation that represents locally a remote Tivoli Workload Scheduler engine, either distributed or z/OS. This type of workstation uses a connection based on the HTTP or HTTPS protocol to allow the local environment to communicate with the remote environment.
Remote job
A job scheduled to run on a remote Tivoli Workload Scheduler engine.
Shadow job
A job defined locally, on a remote engine workstation, that is used to map a remote job. The shadow job definition contains all the information necessary to correctly match the remote job instance in the remote engine plan.
Detailed documentation can be found at: