Purpose of Document
The purpose of this document is to provide an overview of the IBM Business Intelligence Pattern with BLU Acceleration and its moving parts.
IBM Business Intelligence Pattern with BLU Acceleration Summary
Keeping with the cookbook document theme, a pattern is in many ways the restaurant version of software. It's like taking someone else's recipe for making something great, trying to get the same ingredients, trying to follow the instructions, and goofing up somewhere along the way, only to be left with something expensive and just somehow wrong, or charred and tasteless! This happens with software implementations too, and just as we've accepted that some dishes are better made by someone else, sometimes it's better to let someone else put your software together too. After all, most of us don't build our own cars, and amateur dentistry is certainly not popular.
There are lots of ways to put together components of enterprise software in pursuit of the solution to a problem, and let's face it, that's why we do it: not because it's glamorous or makes our kids think we're cool. The scope of IBM products and operations provides us with some unique insights into the variety of ways that clients use our products to solve their business problems, and over the years we've noted that there are many common "patterns" of use. As we've moved to the cloud and spent increasing amounts of time in growth markets, we've seen an emerging trend to "skip the stuff in the middle": clients increasingly view many parts of the software stack as details that are fundamentally less important than the goal of solving the business problem. This is where patterns of expertise come in.
So what is a pattern of expertise? It's a piece of software that wraps all the components you need into an integrated, interoperable unit designed to provide the core functionality you need to solve a problem. The scope of the problem addressed varies by pattern, but all patterns have the following in common:
- they will accelerate time to value
- they will simplify deployment of complex software
- they will enable you to better leverage virtualization in your environment
- they will do so in a repeatable manner
Enter the IBM Business Intelligence Pattern with BLU Acceleration. This solution exists to provide clients with a fast, proven, predictable way to stand up a complete Business Intelligence system with in-memory data acceleration. The pattern deploys the best of IBM Cognos Business Intelligence with Dynamic Cubes, integrated with BLU Acceleration, for "fast on fast" performance with fast time to value. One person and one hour is all it takes to get access to a data connection where you can begin loading your data, and a completely functioning BI environment where you can start deploying projects, modelling, or whatever you want to do. So let's talk about what it's for.
Data volume, query complexity and high user concurrency are challenges that any organization with a high awareness of the value of information faces. Everyone has lots of data, terabytes and terabytes of it, and now a lot of business users know you have it and want to use it. The questions an organization needs to focus on become the following:
- How do you get correct, timely data to the right people?
- How do you scale your organization to provide access to appropriate tools to rapidly moving groups of advanced consumers?
- How do you ensure that this additional, volatile demand on your data warehouse does not impact the myriad of core business applications that rely on stability and performance?
Answers to these questions will vary. They will depend upon your IT resources and budgets, they will depend upon your ability to embrace change and new technology and they will depend on where you are in the business analytics journey. Some clients will begin with limited information infrastructure and need to get started with business intelligence and a data mart. This is a great opportunity to leap to the front of the technology pack and embrace the latest in in-memory technology, enterprise BI, and expert integrated systems to minimize your labour costs, reduce your implementation risk, and increase your chances of success. Some clients will begin with significant infrastructure and a variety of business problems to solve. Some of these business problems are:
- Timeframe to upgrade to latest versions for new functionality
- Groups of disruptive users circumventing processes to extract data and causing performance issues in the warehouse
- Applications with customized requirements that are incompatible with data governance in the warehouse
- Interactive exploration applications that are failing to deliver performance in other Business Intelligence tools due to large dimensions and fact data
- Existing MOLAP infrastructures that are no longer able to address data volume requirements
These cases exemplify the problems that the IBM Business Intelligence Pattern with BLU Acceleration was built to solve.
IBM Business Intelligence Pattern with BLU Acceleration Architecture
The IBM Business Intelligence Pattern with BLU Acceleration solution consists of four virtual AIX servers which host all of the components necessary for an IBM Cognos BI deployment. The IBM Cognos Business Intelligence (BI) deployment consists of two IBM Cognos Content Managers, a single IBM Cognos Report Server and an Elastic Load Balancer (ELB) which directs requests directly to the dispatcher. An IBM DB2 database is used as the IBM Cognos content store. The IBM DB2 BLU instance, which is used as the query data source, is housed on another virtual server.
ELB shared service
The IBM Business Intelligence pattern with BLU Acceleration solution uses the ELB proxy shared service as the entry point to the IBM Cognos BI portal. This proxy replaces the need for a web server and sends requests directly to the IBM Cognos BI dispatcher. The ELB is not part of the IBM Business Intelligence Pattern and must be deployed into the cloud group which will be used by the IBM Business Intelligence Pattern with BLU Acceleration solution.
In order to configure a security provider to be used by the IBM Cognos Content Manager instances, the security provider must exist and contain the users that are to be used to secure the IBM Cognos BI business content. The IBM Business Intelligence Pattern with BLU Acceleration solution is made aware of the external security provider through the addition of a user registry component to the pattern canvas. The actual IBM Cognos Content Manager Security provider is then configured via the available properties of the user registry component.
The IBM Business Intelligence pattern with BLU Acceleration
The IBM Business Intelligence pattern with BLU Acceleration consists of four virtual AIX servers. The virtual server names are generated by taking the virtual pattern name and appending a suffix to uniquely identify the virtual server. The following subsections provide an overview of each virtual server, its name and its function.
The <Hostname>COGP virtual server hosts the IBM Cognos BI dispatcher within a Tomcat application server on an AIX operating system. The dispatcher is used to host the IBM Cognos Dynamic Cube and to service any interactive or batch report requests.
The <Hostname>CM1 virtual server hosts the primary IBM Cognos Content Manager within a Tomcat application server on an AIX operating system. The IBM Cognos Content Manager is responsible for communicating with the IBM Cognos DB2 content store, also located on this virtual server.
The <Hostname>CM2 virtual server hosts the stand-by IBM Cognos Content Manager within a Tomcat application server on an AIX operating system. The stand-by IBM Cognos Content Manager provides the ability to failover the primary IBM Cognos Content Manager service.
The <Hostname>Primary virtual server hosts the IBM DB2 BLU data warehouse which is used to populate the IBM Business Intelligence Pattern with BLU Acceleration solution's IBM Cognos Dynamic Cube data source.
The IBM Cognos Dynamic Query Analyzer and the IBM Cognos Cube Designer client tools are available as a download from the IBM PureApplication System pattern instance page. Clicking on either of these client tools will download the Windows installation program that can be used to install the client tools on a local workstation.
IBM Cognos Dynamic Query Analyzer
The IBM Cognos Dynamic Query Analyzer (DQA) is used to graphically display the execution tree log generated by the QueryService when running IBM Cognos BI business content.
IBM Cognos Cube Designer
The IBM Cognos Cube Designer is used to model the IBM Cognos Dynamic Cube used as the IBM Business Intelligence Pattern with BLU Acceleration solution’s data source. For more information on IBM Cognos Dynamic Cube modelling, see the IBM Cognos Dynamic Cubes Redbook referenced in the Resources section at the end of this document.
IBM Business Intelligence Pattern with BLU Acceleration Life Cycle
The IBM Business Intelligence Pattern with BLU Acceleration consists of an IBM Business Intelligence pattern which deploys a complete IBM Cognos BI environment and data warehouse. This data warehouse must be populated using user-defined Extract, Transform, Load (ETL) scripts. Once the query data warehouse has been populated, an IBM Cognos Dynamic Cube must be modelled and published into the IBM Cognos BI environment as a reporting data source. This data source is then used to create the desired business content in the form of IBM Cognos reports.
IBM Business Intelligence with BLU Acceleration Pattern deployment
Before the query database can be populated with data, the required shared services and the IBM Cognos Business Intelligence pattern with BLU Acceleration need to be deployed. For further details, see the IBM Business Intelligence Pattern with BLU Acceleration Installation and Administration Guide.
Data movement into the IBM DB2 BLU Warehouse
Once the required shared services and the IBM Cognos Business Intelligence Pattern have been deployed, the IBM DB2 BLU data warehouse, which will be used to populate the solution's IBM Cognos Dynamic Cube data source, needs to be populated. The following subsections cover the two ETL scenarios: customers who have an existing data warehouse and ETL infrastructure, and customers who are starting from scratch.
Data movement for Environments with an existing ETL infrastructure
In this scenario, the environment has an existing star schema and ETL process. Before the data is transferred to the IBM DB2 BLU data warehouse, the following should be considered.
- Moving the existing data warehouse data without any changes
This approach requires that the existing data is copied into the IBM DB2 BLU data warehouse periodically. ETL jobs can be developed to load the data into the IBM DB2 BLU data warehouse for initial load. If change data capture (CDC) fields are in place for the fact and dimension tables within an existing data warehouse, then incremental loads can be run to update the IBM DB2 BLU warehouse on a periodic basis. Should the CDC field not be in place, then the IBM DB2 BLU warehouse would need to be truncated and re-loaded every time the data is updated.
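As a rough illustration of the CDC-driven incremental load described above, the Python sketch below uses the standard library's sqlite3 module to stand in for both the source warehouse and the IBM DB2 BLU target. The table name (sales), the last_updated CDC column, and the watermark values are all hypothetical; a real implementation would use your own schema and the DB2 load utilities rather than row-by-row inserts.

```python
import sqlite3

# sqlite3 stands in for the source warehouse and the DB2 BLU target.
# The sales table and its last_updated CDC column are hypothetical names.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE sales (order_id INTEGER, revenue REAL, last_updated TEXT)")

src.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, 100.0, "2014-01-01"), (2, 250.0, "2014-01-15")])

def incremental_load(src, tgt, high_water_mark):
    """Copy only fact rows changed since the last load (CDC-style)."""
    rows = src.execute(
        "SELECT order_id, revenue, last_updated FROM sales WHERE last_updated > ?",
        (high_water_mark,)).fetchall()
    tgt.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    tgt.commit()
    return len(rows)

# Initial load uses an epoch watermark; later runs pass the previous high-water mark.
loaded = incremental_load(src, tgt, "1900-01-01")
src.execute("INSERT INTO sales VALUES (3, 75.0, '2014-02-01')")
delta = incremental_load(src, tgt, "2014-01-15")
print(loaded, delta)  # 2 1
```

Without the CDC column there is no way to compute the delta, which is why the text above recommends truncate-and-reload in that case.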
- The existing data warehouse is row based
If the existing data warehouse is row based, it will need to be converted into a column-based warehouse. See the database vendor-specific information on how to perform this task.
- Moving the existing data warehouse with changes
In this scenario, additional columns are added to an existing data warehouse as part of the move to the IBM DB2 BLU data warehouse. This requirement results in edits to the existing ETL scripts. The customer would also need to evaluate whether a complete re-load is required after adding the new columns. In most cases, doing a full re-load is quicker than performing updates.
Data movement for customers starting from scratch
In this scenario, a new star schema is created within the IBM DB2 BLU warehouse using warehousing best practices. Once the star schema structures are created within the IBM DB2 BLU data warehouse, ETL jobs need to be developed for initial and incremental loads to populate atomic facts and dimensions. For additional information on the ETL, please see the Data Preparation chapter of the IBM Business Intelligence Pattern with BLU Acceleration Installation and Administration Guide.
The IBM Cognos Dynamic Cube Data Source
The IBM Cognos Dynamic Cube is the IBM Business Intelligence Pattern with BLU Acceleration solution's sole data source, which is used to satisfy all the business content. The performance of this content and of interactive analysis is highly dependent on how well the model is designed. The general principles of having a single join between fact and dimension, minimizing the number of data type conversions, and minimizing the number of snowflake dimensions always apply. There are, however, some other tips and techniques that can help you design a well-optimized, high-performing model.
The following sections cover some IBM Cognos Dynamic Cube design considerations. This information is in addition to any information found within the IBM Cognos Dynamic Cubes Redbook (see the Resources section at the end of this document).
It is important to understand the different types of measures that can be created in the IBM Cognos Cube Designer and the scenarios in which each type should be used. Incorrectly used measures can result in significant degradation of report performance. The IBM Cognos Cube Designer supports two types of measures: relational and dimensional.
Relationally created measures are pushed down to the database; the IBM DB2 BLU database is designed to process and summarize large volumes of data. Relational measures should be created under the following conditions:
- The measure can be directly mapped to the source column.
- The measure can be calculated using relational-style functions over columns that can be directly mapped from the source fact table, where the measure is either calculated from other measures that are directly mapped to a source, or calculated from a combination of other measures and dimensional foreign keys stored in the fact table.
Dimensional measures, on the other hand, are designed to calculate the measure value on top of the data pulled out and summarized by a relational measure. Relational measures can only use relational-style syntax in their expressions, and dimensional calculated measures can only use Multidimensional Expressions (MDX) style functions and expressions. If a dimensional measure is created using relational-style functions, it will be syntactically correct in the model but will be automatically removed from the IBM Cognos Dynamic Cube on publishing. Dimensional measures should be created under the following conditions:
- The measure requires multiple-path aggregation and is derived from the aggregation of other relational measures. For example, Turnover Rate = Count of Employees who left the organization / Count of all Employees.
- The measure is calculated at a certain dimensional level and uses some hierarchy levels. For example, Deal Size is calculated at the order number level requiring all order lines to be summed up to an order level. Typically, this type of calculation will be created using the “within set” MDX function referencing a certain hierarchy level.
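The multiple-path aggregation behind a measure like Turnover Rate can be seen in a few lines of Python (the department rows below are made-up numbers): the ratio must be computed on top of the already aggregated relational measures, not accumulated row by row.

```python
# Hypothetical employee fact rows: (department, headcount, leavers)
rows = [("Sales", 40, 4), ("Support", 10, 5), ("IT", 50, 1)]

# Relational measures: simple sums that would be pushed down to the database.
total_headcount = sum(r[1] for r in rows)  # 100
total_leavers = sum(r[2] for r in rows)    # 10

# Dimensional measure: the ratio is calculated over the aggregated
# relational measures, at whatever level the user drills to.
turnover_rate = total_leavers / total_headcount  # 10%

# Summing per-row ratios instead would produce a meaningless result,
# which is why this measure cannot be a plain relational measure.
wrong = sum(r[2] / r[1] for r in rows)  # 0.1 + 0.5 + 0.02
print(turnover_rate, wrong)
```

The same distinction applies at any hierarchy level: the two counts aggregate independently along their own paths, and the division happens last.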
Table 1 below summarizes the aforementioned factors that should be considered when deciding between relational and dimensional measures.
Table 1 - Relational and dimensional measure overview
| Relational Measure | Dimensional Measure |
| --- | --- |
| Uses relational functions and expressions only | Uses dimensional functions and expressions only |
| Query is pushed down to the database or derived from an in-memory aggregate table | Query is processed locally using base relational measures |
| Used on atomic data | Used on summarized data |
| Considered by the Aggregate Advisor for inclusion in aggregate tables and in-memory aggregates | Not considered by the Aggregate Advisor for aggregate tables and in-memory aggregates |
| Cannot use a hierarchy level in a calculation | Can use a hierarchy level in a calculation |
| Can be used in a virtual cube | Cannot be used in a virtual cube |
The IBM Cognos Cube Designer does not have a concept of shortcut aliases as they exist within IBM Cognos Framework Manager. Each role-playing dimension should be modelled as a separate dimension within the IBM Cognos Cube Designer.
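In relational terms, a role-playing dimension is one physical table joined once per role. The sqlite3 sketch below (with hypothetical date_dim and orders tables) shows the two joins that correspond to modelling, say, an Order Date dimension and a Ship Date dimension as separate dimensions in Cube Designer.

```python
import sqlite3

# Hypothetical tables: one physical date dimension playing two roles
# (order date and ship date) against the same fact table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE date_dim (date_key INTEGER, caption TEXT)")
db.execute("CREATE TABLE orders (order_id INTEGER, order_date_key INTEGER, ship_date_key INTEGER)")
db.executemany("INSERT INTO date_dim VALUES (?, ?)",
               [(20140101, "Jan 1 2014"), (20140105, "Jan 5 2014")])
db.execute("INSERT INTO orders VALUES (1, 20140101, 20140105)")

# Each role is a separate join to the same dimension table; in Cube
# Designer, each role is likewise modelled as its own dimension.
row = db.execute("""
    SELECT od.caption AS order_date, sd.caption AS ship_date
    FROM orders o
    JOIN date_dim od ON o.order_date_key = od.date_key
    JOIN date_dim sd ON o.ship_date_key = sd.date_key
""").fetchone()
print(row)  # ('Jan 1 2014', 'Jan 5 2014')
```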
User Range Dimensions
A common IBM Cognos Business Intelligence reporting requirement is to allow a report consumer to view measure values bucketed by measure ranges. Some common examples include showing revenue by deal size buckets or viewing employee income by different length of service groups.
This requirement can be implemented within IBM Cognos Cube Designer by creating a static dimension of measure buckets which consist of the bucket name, low range value and high range value. This static dimension can then be joined to the fact table using two relationships. One relationship where the measure from the fact table is less than the high range from the bucket dimension and another relationship where the same measure is greater than or equal to the low range from bucket dimension.
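The two range relationships described above can be sketched as SQL join predicates, again using sqlite3 as a stand-in for DB2 BLU. The orders fact, the deal_bucket dimension, and the bucket boundaries are hypothetical; in Cube Designer, the two predicates below would each be modelled as a relationship between the fact and the static dimension.

```python
import sqlite3

# Hypothetical fact and static bucket dimension with half-open [low, high) ranges.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INTEGER, revenue REAL)")
db.execute("CREATE TABLE deal_bucket (name TEXT, low REAL, high REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 500.0), (2, 4500.0), (3, 60000.0)])
db.executemany("INSERT INTO deal_bucket VALUES (?, ?, ?)",
               [("Small", 0, 1000), ("Medium", 1000, 50000), ("Large", 50000, 1e12)])

# One condition per relationship: measure >= low, and measure < high.
result = db.execute("""
    SELECT b.name, COUNT(*), SUM(o.revenue)
    FROM orders o
    JOIN deal_bucket b ON o.revenue >= b.low AND o.revenue < b.high
    GROUP BY b.name
    ORDER BY b.low
""").fetchall()
print(result)  # [('Small', 1, 500.0), ('Medium', 1, 4500.0), ('Large', 1, 60000.0)]
```

Using half-open ranges (greater than or equal to the low value, strictly less than the high value) ensures every fact row lands in exactly one bucket.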
The use of aggregates is critical to the optimal operation of an IBM Cognos Dynamic Cube data source. Aggregates can be generated based on workload and model analysis. For fact data smaller than 1 terabyte (TB), the aggregates should be based on both model analysis and a representative workload. For fact data larger than 1 TB, the aggregates should be generated from a representative workload only.
In-memory and external (in-database) aggregates complement each other to achieve optimal cube performance. Both types of aggregates should be used; therefore, the cube property Disable external aggregates should be set to false.
IBM Cognos Business Intelligence Pattern with BLU Acceleration Logging
The IBM PureSystems console provides easy access, searching and archiving for all the logs in the deployment. This section provides an overview of the following:
- Log File Organization
- Log File Content Search
- Downloading the Log File to the Local Client
Log File Organization
The log viewer utility organizes the logs by virtual machine. In Figure 1 below, the log viewer lists all the virtual machines which make up the solution.
Figure 1 - IBM PureSystem log viewer
Each listed virtual machine further organizes the logs by application and any application-specific logic. Figure 2 illustrates the expanded view of two virtual machines; underneath each virtual machine, a list of functional areas is displayed.
Figure 2 - Log viewer displaying functional areas of each virtual server
In most cases, the functional area actually maps to a physical folder on the virtual machine. In Figure 3, the functional area named DB2 has been expanded to display the IBM DB2 log files and their physical location on the operating system.
Figure 3 - Log viewer highlighting the actual location of the IBM DB2 file db2diag.log
Log File Content Search
Once a log file is selected, it is displayed within the right pane. The file can be searched by clicking the magnifying glass icon. A search dialog box similar to the one in Figure 4 will appear, allowing a user to enter the desired search criteria.
Figure 4 - Log viewer search dialog box
Search results are visible within the large pane, and each individual line that matches the criteria is highlighted. Figure 5 illustrates the match highlighting for the search criteria "aggregate".
Figure 5 - Log viewer search results
Additionally, the end of the file can be monitored in real time by clicking the Monitor end of log file button located on the toolbar. This button scrolls to the last line in the log and, as the file grows, continues scrolling to the bottom.
Downloading the Log File to the Local Client
Occasionally, it may be necessary to download one or more log files to the local client machine. The IWD console makes it easy to download one file at a time, or all log files across all virtual machines.
To save the current log file, click on the Save icon in the toolbar (Figure 6) and provide a local folder location when prompted.
Figure 6 - Toolbar save icon
To save all log files, click on the Download All link in the left pane (Figure 7).
Figure 7 - Toolbar Download All icon
The Download All button will save four archives, each one corresponding to one of the virtual machines in the pattern. The archive names will correspond to the virtual machine names as displayed in the log viewer.
IBM Cognos Business Intelligence Pattern with BLU Acceleration Troubleshooting
To determine how much free space in gigabytes you have, run the df -g command from an AIX command line. IBM Cognos products are always installed on /opt1, as shown in Figure 8 below.
Figure 8 - Output from the df -g command
IBM Business Intelligence Pattern with BLU Acceleration
To troubleshoot issues that might be encountered with virtual application pattern plug-ins and deployments, refer to the troubleshooting information in the IBM PureSystems Information Center.
IBM DB2 BLU
The primary log file intended for use by database and system administrators is the administration notification log. The db2diag log files are intended for troubleshooting purposes. Administration notification log messages are also logged to the db2diag log files using a standardized message format. The db2diag tool serves to filter and format the volume of information available in the db2diag log files. Filtering db2diag log file records can reduce the time required to locate the records needed when troubleshooting problems.
More information on these logs and the db2diag tool can be found in the IBM DB2 Information Center.
Default QueryService Logging Level
The default logging settings may not be optimal for diagnosing performance or query issues. In order to collect additional information to optimize your environment, it may be necessary to alter the settings.
The diagnostics logging file is located on the <Hostname>COGP virtual server within the following directory:
It is important to note that only the following entries should be modified, as changing other values may affect your installation in unpredictable ways. The settings indicated are applied dynamically; there is no need to restart your dynamic cube or query service for the changes to take effect. All information captured is written to the xqelog files located in the /opt1/ibm/cognos/c10_64/logs/XQE directory.
Cache information for in-memory aggregates:
<eventGroup name="ROLAPCubes.AggregateCache" level="info"/>
Cache information for in-database aggregates:
<eventGroup name="ROLAPCubes.AggregateStrategy" level="info"/>
Trace generated SQL for all query activity:
<eventGroup name="QueryService.SQL" level="info"/>
Enable logging of the Aggregate Advisor:
<eventGroup name="ROLAPAggregateAdvisor" level="info"/>
Track and log ROLAP performance indicators:
<eventGroup name="ROLAPQuery" level="info"/>
<eventGroup name="ROLAPQuery.Performance" level="info"/>
The default logging level in all cases is info, which gives only limited information. Acceptable levels are info, error and trace. For optimization work, it is recommended to set all levels to trace. Once enough information has been collected, it is important to reset them to info; otherwise, your log files will grow very large.
Also note that this is not a persistent file: if you apply a patch or update to your query service dispatcher (COGP), the file will be overwritten and all your settings will be erased. It is recommended that you download and archive the file to your local client in order to prevent unexpected changes.
Appendix: Error Messages
The following section lists some of the error messages and their resolution.
CWZIP1962E The requested operation cannot be accomplished because the state of the resource is not compatible with the operation.
- Check that the versions of IWD and IBM PureApplication System are at the required level.
CWZKS1053E: createService cognosbi-elb/188.8.131.52_com.ibm.cognos.elb.ELBControllerServiceProvisioner error. Please ensure service is started. Look in the Kernel Services logs to see more information on the underlying error.
- Ensure that the ELB shared service is running in the cloud group to which you are trying to deploy the IBM Business Intelligence Pattern with BLU Acceleration.
- Ensure that the virtual host name that has been configured for the virtual application pattern is not already in use (each virtual application deployment must have a unique virtual host name).
“Cannot find host” or “Unknown host” messages.
- Ensure that the virtual host name entered during pattern composition can be resolved through DNS or ensure that the virtual host name entered during pattern composition has been entered in the “hosts” file.
Connectivity errors when trying to publish cube or apply in-database recommendations.
- Ensure that the gateway and dispatcher URLs are set within the IBM Cognos Configuration as follows:
- Gateway URI
- Dispatcher URI for external applications
Insufficient CPU resources for deployment.
- Check the cloud group setting for Reserve resources for availability. If set to Yes, then deploying the pattern in that cloud group will require two times the resources specified within the virtual application pattern.
- Ensure that there are enough CPUs in the cloud group for the deployment size specified in the virtual application pattern. For dedicated CPU allocation, the number of CPUs as specified in the virtual application pattern must be available as physical CPUs in the cloud group. For average CPU allocation, the number of CPUs as specified in the virtual application pattern must be available as virtual CPUs in the cloud group.
Insufficient memory resources for deployment.
- Check the cloud group setting for Reserve resources for availability. If set to Yes, then deploying the pattern in that cloud group will require two times the resources specified in the virtual application pattern.
- Ensure that there is enough memory in the cloud group for the deployment size specified in the virtual application pattern. Note that memory is not divided equally between all VMs in the pattern, and approximately 60% of the memory specified in the virtual application pattern must be available on a single compute node.
Resources
- IBM Cognos Dynamic Cubes Redbook
- IBM Business Intelligence Pattern Home Page
- IBM PureSystems Center - IBM Business Intelligence Pattern