IBM Business Analytics Proven Practices: IBM Business Intelligence Pattern with DB2 BLU Acceleration Performance Considerations

Product(s): IBM BI Pattern with DB2 BLU Acceleration

This document summarizes critical performance aspects that can be used to configure the IBM Business Intelligence Pattern with BLU Acceleration solution for optimal performance.


Dominic Roy, Performance Lead - BA Pure Systems, IBM Business Analytics Growth Initiatives, IBM

Dominic Roy is a software engineer with over 26 years of experience. During that period, he has contributed to the development of enterprise-level products and systems, with a primary focus on high performance, scalability and stability. His contributions span a wide variety of products, from Enterprise Planning (READ/WRITE cubes) to Risk Management, full-text search engines, geographical information systems and the development of an advanced platform for BA. His main areas of interest are J2EE and Cloud technology.



06 January 2014


Introduction

Purpose of Document

This document summarizes critical performance aspects that can be used to configure the IBM Business Intelligence Pattern with BLU Acceleration solution for optimal performance.

Exclusions and Exceptions

The operation and functionality of IBM Cognos Dynamic Cubes, IBM DB2 10.5 with BLU Acceleration and IBM PureApplication System will not be described in detail.

Assumptions

This document assumes some experience with using an IBM PureApplication system, the IBM Cognos Business Intelligence application and IBM Cognos Dynamic Cubes.


IBM Cognos Dynamic Cubes

IBM Cognos Dynamic Cubes and IBM DB2 with BLU Acceleration form the backbone of the IBM Business Intelligence Pattern with BLU Acceleration offering. For a more complete discussion of IBM Cognos Dynamic Cubes, please refer to the IBM Cognos Dynamic Cubes Redbook referenced in the Resources section at the end of this document.

IBM Cognos Dynamic Cubes effectively scale up to terabytes of data, using aggregate-awareness and in-memory technology to accelerate performance. The accelerated performance comes from IBM Cognos Dynamic Cubes' ability to take advantage of both in-database and in-memory pre-computed summaries. These pre-computed summaries can improve the performance of queries by orders of magnitude, providing the type of performance required for interactive reporting and analysis.

Aggregates

Aggregates are the pre-computed summaries mentioned above; they represent subsets of a cube, typically at the intersections of higher-level hierarchy members.

Use of aggregates is critical to the optimal operation of an IBM Cognos Dynamic Cube. In-memory and external (in-database) aggregates complement each other to achieve optimal cube performance. Both types of aggregates should be used. We recommend the following approach:

  • For warehouses smaller than 1 TB, generate aggregates based on both model analysis and a representative workload.
  • For warehouses larger than 1 TB, generate aggregates based on a representative workload only.

Workload generation is a mechanism for recording usage patterns while a cube is being exercised. It produces a workload file that specifies which portions of the cube were accessed. This file is then used as input to a cube optimization process in which aggregates are identified and built.

For more information on cube optimization, please refer to the section on Cube Optimization below.

IBM Cognos Dynamic Cube Caches

The high performance of IBM Cognos Dynamic Cubes is in large measure due to the caching of cube data at various levels - from high-level cube metadata to fine-grained fact data. Best practices regarding IBM Cognos Dynamic Cubes cache settings can be found in the developerWorks article Dynamic Cubes Hardware Sizing Recommendations (see the Resources section at the end of this document).

Setting the various cache sizes is performed on a per-cube basis. Therefore, these steps are outside the scope of a pattern deployment and must be performed from IBM Cognos Administration. To set a cube's properties:

  1. Launch IBM Cognos Administration.
  2. Select Status.
  3. Select System.
  4. Select the server.
  5. Select the Dispatcher hosting the cube of interest.
  6. Select QueryService.
  7. From the context menu of the target cube, select the Set Properties option.

Result Set Cache (Disk Space Requirement)

It is recommended that the result set cache be enabled. The result set cache retains on disk, in a binary format, a copy of the results of MDX queries executed against an IBM Cognos Dynamic Cube. The optimal size of the result set cache is proportional to the number of named users on a system, as listed in Table 1.

Table 1: Disk Space Requirement for the Result Set Cache
  Named Users      Estimated disk space requirement
  < 100            1 GB
  100 - 1000       10 GB
  1000 - 5000      50 GB
  > 5000           100 GB

Each cube has its own result set cache. To enable the result set cache:

  1. From the context menu of the target cube, select the Set Properties option.
  2. Uncheck the Disable result set cache check box.
  3. Set the Maximum amount of disk space to use for result set cache cube property.
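
To make the sizing rule concrete, Table 1 can be expressed as a simple lookup. The Python sketch below is illustrative only - the thresholds come straight from the table, but the helper function and its boundary handling are assumptions, not part of any IBM tooling.

  def result_set_cache_gb(named_users):
      """Recommended result set cache disk space in GB, per Table 1."""
      if named_users < 100:
          return 1
      if named_users <= 1000:
          return 10
      if named_users <= 5000:
          return 50
      return 100

  # Example: a system with 2,500 named users
  print(result_set_cache_gb(2500))  # 50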

Members Cache Size

The IBM Cognos Dynamic Cube loads all of the members of its hierarchies into memory when the cube is started. Typically, the largest hierarchies dwarf all of the other hierarchies by orders of magnitude. For that reason, the estimated size of the member cache is:

the sum of the number of members in the two largest hierarchies, multiplied by 600 bytes.

This value is used to derive both the data cache size and the aggregate cache size.
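
As a minimal worked example of the rule above, assuming a hypothetical model whose two largest hierarchies contain 4,000,000 and 1,500,000 members (substitute the counts from your own model):

  # Hypothetical hierarchy sizes - not taken from any real model.
  BYTES_PER_MEMBER = 600
  member_cache_bytes = (4_000_000 + 1_500_000) * BYTES_PER_MEMBER
  print(member_cache_bytes / 2**30)  # ~3.07, i.e. roughly 3 GB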

Aggregate Cache

The contents of the aggregate cache are intended to provide quick access to values computed at higher levels of aggregation in a cube. Though the number of aggregate values can grow as a cube's overall size increases, the amount of data required in the aggregate cache is not necessarily linearly correlated with the cube's size. As a result, a sliding scale is used to compute the minimum size of the aggregate cache relative to the size of the member cache (Table 2).

Table 2: Memory Requirement for the Dynamic Cubes Aggregate Cache
  Named Users      Aggregate cache size as % of member cache
  < 100            100%
  100 - 1000       85%
  1000 - 5000      45%
  > 5000           45%

Each cube has its own aggregate cache. To set a cube's aggregate cache size:

  1. From the context menu of the target cube, select the Set Properties option.
  2. Set the Maximum space for in-memory aggregates (MB) cube property.
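
A minimal sketch of Table 2's sliding scale, assuming the member cache size has already been estimated as described above (the helper and its boundary handling are illustrative assumptions, not an IBM API):

  def aggregate_cache_bytes(member_cache_bytes, named_users):
      """Minimum aggregate cache size per Table 2's sliding scale."""
      if named_users < 100:
          pct = 1.00
      elif named_users <= 1000:
          pct = 0.85
      else:
          pct = 0.45  # applies to 1000 - 5000 and above
      return member_cache_bytes * pct

  # Example: the ~3.3 GB member cache above with 2,500 named users
  print(aggregate_cache_bytes(3_300_000_000, 2500) / 2**30)  # ~1.38 GB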

Data Cache Size

The role of an IBM Cognos Dynamic Cube data cache is documented in the IBM Cognos Dynamic Cubes Redbook. Essentially, the data cache contains the results of the data queries posed to a dynamic cube by the MDX engine. This cache is populated as MDX queries are served, and its data is organized along dimensions/members to speed up retrieval.

Note that each cell stored in a dynamic cube consumes 20 bytes in a 64-bit Java Virtual Machine (JVM). As a result, each gigabyte of the dynamic cube's data cache will store approximately 45-50 million cells (values).

The number of named users has an impact on the minimum data cache size. Each user is likely to retrieve some data specific to the report they are executing; even when users run the same report, their prompt values typically differ. The minimum data cache size can be calculated as follows:

The greater of (15% of the member cache size + user factor) OR ((28% of the member cache size + user factor) - (size of the aggregate cache))

where user factor = number of named users * 200 KB.

This is how the effect of supporting multiple named users is factored into the data cache size. Note that this is a minimum requirement - allotting more memory for the data cache will improve performance as the system gets used.
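
Putting the pieces together, here is a minimal sketch of the minimum data cache calculation. It assumes the user factor's "200 KB" means 200 kibibyte-based kilobytes per named user; the function is illustrative only, not an IBM API.

  def min_data_cache_bytes(member_cache_bytes, aggregate_cache_bytes, named_users):
      """Minimum data cache size per the formula above."""
      user_factor = named_users * 200 * 1024  # 200 KB per named user
      return max(
          0.15 * member_cache_bytes + user_factor,
          0.28 * member_cache_bytes + user_factor - aggregate_cache_bytes,
      )

  # Example: the member and aggregate cache sizes estimated above
  print(min_data_cache_bytes(3_300_000_000, 1_485_000_000, 2500) / 2**30)  # ~0.94 GB

  # Cell capacity check: at 20 bytes per cell, one gigabyte holds
  # 2**30 / 20 ~= 53.7 million cells; JVM overhead brings the practical
  # figure down to the 45-50 million cells cited above.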

Cube Optimization

Cubes should be optimized on a regular basis, especially when new reports and dashboards are put into production, since new content will change the access patterns. In this case, it is recommended to update existing aggregates using a workload representative of reporting access patterns.

Typically, a workload is generated in a Development environment. It involves a representative subset of the reports and dashboards used in Production, and can include additional reports that exercise typical access patterns. The cube property Enable workload logging should be set to true for the sampling period (in Development only). This representative set of reports is then exercised in Development to produce a workload that can be applied in Production. Typically, a Development environment involves a more limited dataset. As new reports and dashboards are added to Production, they should also be added to the Development system's representative workload, and the workload re-evaluated to ensure the aggregates have appropriate coverage. The workload file produced in the Development environment should then be migrated to the Production environment and applied when updating aggregates.

Cache Priming Techniques

For a detailed discussion on cache priming techniques, please refer to the IBM Cognos Dynamic Cubes Redbook. This extract provides a useful overview:

Cognos Dynamic Cubes cache-priming involves running a set of reports to populate the result set cache, expression cache, and data cache to accelerate query performance for users. This technique can be used to optimize specific reports, especially if they compute lots of values or it is more efficient to cache certain information in a targeted way rather than relying on large aggregates.

The reports that are used for cache-priming can be those from a known set of dashboards and reports that a majority of users will use, or those that process large volumes of data to cache upfront for reuse by multiple users. For example, if there are a set of dashboards or reports that most users use as a starting point, these are good candidates for priming so that all users can benefit from the quick performance that cached values can provide.

Cube Start

An IBM Cognos Dynamic Cube is available for use only when all of its dimensional data has been fully loaded. For an IBM Cognos Dynamic Cube involving roughly 10 million members, this operation may take about 15 minutes. For a more precise assessment of the time required to load a cube's dimensional data, a throughput of 27,000 members/second can be used.

For optimal IBM Cognos Dynamic Cube performance, in-memory aggregates must be loaded. Although the throughput for loading aggregates is highly dependent on the cube's complexity, a throughput of 54,000 rows/second can be used to help assess the time required to load aggregates.
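
These two throughput figures can be combined into a rough startup estimator. The sketch below gives a floor under the stated throughputs; real startup includes additional processing, and the example member and row counts are hypothetical.

  def cube_start_seconds(n_members, n_aggregate_rows):
      """Lower-bound startup estimate from the published throughputs."""
      member_throughput = 27_000     # members loaded per second
      aggregate_throughput = 54_000  # aggregate rows loaded per second
      return n_members / member_throughput + n_aggregate_rows / aggregate_throughput

  # Example: 10 million members plus 25 million aggregate rows
  print(cube_start_seconds(10_000_000, 25_000_000) / 60)  # ~14 minutes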


Systems Considerations

Recommended Cores

An IBM Business Intelligence Pattern with BLU Acceleration deployment requires a minimum of 32 physical cores (128 concurrent execution threads) to serve the needs of a large community of users (10,000 or more named users) and deliver the performance expected for interactive reporting. That number of cores corresponds to a single compute node on a POWER7-based IBM PureApplication System. The optimal target environment for this pattern would involve two compute nodes.

Named Users

A typical deployment of the IBM Business Intelligence Pattern with BLU Acceleration should address the reporting needs of a sizeable community of users. These communities can range from 10,000 named users on one compute node to 20,000 named users on two compute nodes.


IBM PureApplication System Resource Allocation

In a production environment, where response times must be predictable, over-allocation of CPU resources is not recommended. Ideally, the BI-BLU pattern should be deployed in a Cloud Group configured with the dedicated CPU utilization model, where there is a one-to-one correspondence between a physical core and a logical CPU. That way, CPUs are not shared across virtual applications and system performance is predictable, particularly under load.

For more information on this topic, please consult the developerWorks article titled Managing application runtime environments in IBM PureApplication System (see Resources section). This article provides a detailed discussion on resource utilization on an IBM PureApplication system and reviews the tradeoffs between resource utilization and resource contention in a cloud environment.

In an IBM PureApplication system, the mechanisms for sharing resources in a cloud are organized around the following concepts:

  • Compute node: This is a set of computer hardware containing CPU and memory that has access to storage and networking.
  • IP group: This is a set of IP addresses, the ID of the VLAN they will use to communicate, and settings for how to connect to the network the VLAN is part of.
  • Cloud group: This is a collection of one or more compute nodes and one or more IP groups. It is essentially a logical computer. It physically isolates resources.
  • Environment profile: This is a policy for deploying patterns into cloud groups. It creates logical isolation of resources by allocating the resources.
  • User group: This is a list of users in the same role, a role that can use environment profiles to deploy patterns.

More detailed information on these critical concepts and how to configure them can be found in the section titled Administering cloud resources in the IBM PureApplication System Information Center.

Resources

  • IBM Redbooks: IBM Cognos Dynamic Cubes
  • developerWorks article: Dynamic Cubes Hardware Sizing Recommendations
  • developerWorks article: Managing application runtime environments in IBM PureApplication System
  • IBM PureApplication System Information Center: Administering cloud resources
