IBM Cognos 10 BI: Components & User Interfaces
The figure below shows the different components and how they fit together. The top layer contains Cognos Connection, Cognos Administration, Business Insight, and the various studios. These are all web-based, so end users need not install any client-side software as long as they have a current web browser.
The bottom layer is the data layer, which may comprise homogeneous or heterogeneous database systems. The data may be relational or multidimensional. On top of it sit three modeling tools - Framework Manager, Transformer, and Metric Designer - all of which are client-side installations.
We will walk through the components from top to bottom, as shown in the BI components figure below.
Let's start with the Welcome screen. From this interface you can open the different interfaces according to your needs.
Cognos Connection is the portal to IBM Cognos software. IBM Cognos Connection
provides a single access point to all corporate data available in IBM Cognos
software. Cognos Connection is -
- Customizable portal interface to all Cognos 10 content (reports, analyses, queries, agents, metrics, and packages)
  - Public Folders are shared and secured by user or group
- Launching point for different capabilities based on permissions
  - Personalize language, run options, and home page
- Access and personalize content
  - Access and view content, live or saved output versions
  - Open the report for editing
  - Personalize the view by setting prompt values, output format, and language
  - Schedule the report to run in the future or at recurring intervals
Administration is a central management interface that contains the
administrative tasks for Cognos BI. It provides easy access to the overall
management of the IBM Cognos environment and is accessible through Cognos
Connection. Cognos Administration is organized into three sections:
Status: Use the links in this section to monitor activities, server status, and system metrics, and to change some system settings.
Security: Use the links in this section to define users, groups, and roles for security purposes, configure capabilities for the interfaces and studios, and set properties for the user interface profiles (Professional and Express) that are used in Report Studio.
Configuration: Use the links in this section to set up data source connections, deploy IBM Cognos BI content from one content store to another, create distribution and contact lists, add printers, set styles, manage portlets and portal layout, start or stop dispatchers and services, and change system settings.
You can also perform the following administrative tasks:
- automating tasks
- setting up your environment and configuring your database for multilingual reporting
- installing fonts
- setting up printers
- configuring web browsers
- allowing user access to Series 7 reports from IBM Cognos Connection
- restricting access to IBM Cognos software
Aside from the typical
administrative tasks, you can also customize the appearance and functionality
of different IBM Cognos components.
Use IBM Cognos Configuration -
- to set the initial configuration of IBM Cognos components after you install them
- to configure IBM Cognos components when you want to change a property value or add components to your environment
- to start or stop the service for an IBM Cognos component on the local computer
You can run IBM Cognos Configuration in either interactive or silent mode. In interactive mode, you use a graphical user interface to configure the IBM Cognos components. In silent mode, the tool runs in the background and requires no interaction from you, as part of an unattended installation.
If you change the value of a property, you must save the configuration and then restart the IBM Cognos service to apply the new settings to your computer.
In distributed installations, ensure that you have configured all computers where you installed Content Manager before you change default configuration settings on other IBM Cognos computers. For example, you can -
- change the default user and password for Cognos Content Database
- change a URI
- configure IBM Cognos components to use IBM Cognos Application Firewall
- configure temporary file properties
- configure the gateway to use a namespace
- enable and disable services
- configure fonts
- configure font support for Simplified Chinese
- change the default font for reports
- save report output to a file system
If you want to change the default behavior of IBM Cognos components to better suit your IBM Cognos environment, you can also configure Portal Services, configure an authentication provider, and test the installation.
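As an illustrative sketch of the two modes, the tool can be launched from the command line; the path below assumes a default Windows installation, so adjust it for your environment. The -s switch is what triggers silent mode:

```shell
REM Interactive mode: opens the IBM Cognos Configuration GUI
"C:\Program Files\IBM\Cognos\c10\bin\cogconfig.bat"

REM Silent mode: applies the saved configuration without user
REM interaction, suitable for unattended installations
"C:\Program Files\IBM\Cognos\c10\bin\cogconfig.bat" -s
```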
1) IBM Cognos Business Insight
IBM Cognos Business Insight is a Web-based tool
that allows you to use IBM Cognos content and external data sources to build
sophisticated interactive dashboards that provide insight and facilitate
collaborative decision making.
You can view and open
favorite dashboards and reports, manipulate the content in the dashboards, and
email your dashboards. You can also use comments and activities for
collaborative decision making and use social software such as IBM Lotus®
Connections for collaborative decision making.
Create dashboards with
Business Insight to give business users in your organization an integrated
Business Intelligence experience that includes collaborative decision making. A
dashboard allows users to quickly complete a wide variety of tasks such as
viewing and interacting with reports and collaborating and sharing information.
When you create an
interactive dashboard, you are assembling IBM Cognos content. You can also add
content from HTML and text sources.
IBM Cognos Business Insight is -
- Create and share dashboards
  - Assemble content for a personal or shared dashboard without IT intervention
  - Arrange elements in an intuitive, WYSIWYG interface
- Interact with information for greater understanding
  - Personalize the look and feel with easy formatting options
  - Select alternate visualizations
  - Conduct further analysis in context
- Share and collaborate on key information
  - Add comments to clarify and question
  - Search for existing content and begin authoring
2) IBM Cognos Business Insight Advanced
IBM Cognos Business
Insight Advanced provides a single, integrated environment for advanced
business users who need to do more than consume reports and dashboards that are
authored for them. This solution does not require you to use different user
interfaces depending on whether the data is dimensional or relational and
whether your primary task is report authoring or data exploration. You can use
it to author new reports on relational or dimensional data. With Business
Insight Advanced, you can create and format a wide variety of reports,
including lists, cross tabs, charts, and financial statement style reports. In
addition, you can use it for OLAP exploration and can mix exploration and authoring
activities seamlessly without switching interfaces or modes.
IBM Cognos Business Insight Advanced is -
- A drag-and-drop content authoring interface designed for business users
- Suited to relational or dimensional data
- Rich data layouts and visualizations
- Styles, layout, and distribution options
- Content that can be run as a standalone report, incorporated into Business Insight, or used as a starting point for further enhancement by professional authors
IBM Cognos Report Studio
is a robust report design and authoring tool. Using IBM Cognos Report Studio,
report authors can create, edit, and distribute a wide range of professional
reports. You can author the entire range of enterprise reports with relational or
dimensional data sources, and show data in lists, crosstabs, and various kinds
of charts. You can write a report once and distribute it to many users in
multiple languages and formats.
Report Studio is -
- A professional report authoring environment
- Create new reports or enhance content created by business users
- Fine-grained control over layout, formatting, and presentation for production reports
- Advanced capabilities for the professional author
  - Extend reports with interactive maps and prompts
  - Multiple logical pages for varied content
  - Interactive tables of contents
  - Create offline Active Reports
  - Incorporate statistical analysis
- Any report can be used as a template. You
simply create and format a report and then use it as your starting point
for all other reports, leaving the original report unchanged. A report
intended to be used as a template usually does not contain data so that
the report can be used with multiple packages. You can start from
numerous pre-defined templates, or a blank report, as shown below.
Your users can interact
with the reports you distribute if you add prompts or enable drill-through
access to another report, or both. By
answering prompts when a report is run, your users customize the contents of
the report to meet their information needs. One authored report can then meet
the requirements of many users. By enabling drill-through access to another
report, your users can navigate from one report to the next.
Using IBM Cognos Query
Studio, users with little or no training can quickly design, create, and save ad-hoc
queries and reports to meet reporting needs that are not covered by the
standard, professional reports created in IBM Cognos Report Studio. In Query
Studio, you can -
- View data: connect to a data source to view data in a tree hierarchy, and expand the query subjects to see query item details.
- Create reports: use the data source to create reports, which you can save and reuse. You can also create a new report by opening an existing report, changing it, and saving it under another name.
- Change the appearance of reports: improve the layout of your report. For example, you can create a chart, add a title, specify text and border styles, or reorder columns for easy comparison.
- Work with data in a report: use filters, summaries, and calculations to compare and analyze data. Drill up and drill down to view related information.
IBM Cognos Analysis Studio
With Analysis Studio,
users can explore and analyze data from different dimensions of their business.
Users can also compare data to spot trends or anomalies in performance. Analysis
Studio provides access to dimensional, online analytical processing (OLAP), and
dimensionally modeled relational data sources. Analyses created in Analysis Studio
can be opened in IBM Cognos Report Studio and used to build professional reports.
Use the interactive
drag-and-drop environment in Analysis Studio to explore and analyze data to find
answers to business questions.
Using Analysis Studio, you can -
- find and focus on items that are important to your business
- understand trends and anomalies
- compare data, such as details to summaries, or actual results to budgeted results
- assess performance by focusing on the best or worst results
- establish relative importance using calculations such as growth or rank
- share your findings with others
Like IBM Cognos Series 7
PowerPlay Web, Analysis Studio helps you answer business questions quickly and
easily. Analysis Studio supports the same drill up and down behavior and
drag-and-drop control as PowerPlay Web, while addressing demands for more effective
ways to analyze large amounts of data.
In Event Studio, you set
up agents to monitor your data and perform tasks when business events or
exceptional conditions occur in your data. When an event occurs, people are
alerted to take action. Agents can publish details to the portal, deliver
alerts by email, run and distribute reports based on events, and monitor the
status of events. For example, a support call from a key customer or the cancellation of a large order might trigger an event, sending an email to the appropriate people.
Use Event Studio to notify
decision-makers in your organization of events as they happen, so that they can
make timely and effective decisions. You create agents that monitor your
organization’s data to detect occurrences of business events. An event is a
situation that can affect the success of your business. An event is identified
when specific items in your data achieve significant values. Specify the event
condition, or a change in data, that is important to you. When an agent detects
an event, it can perform tasks, such as sending an e-mail, adding information
to the portal, and running reports.
The IBM Cognos Platform
includes a new service to support enhanced event management functionality
called the Human Task Service. This service is based upon an open
specification called WS-Human Tasks. IBM Cognos BI includes the following types
of human tasks that you can see in the task inbox:
- Approval requests
- Ad-hoc tasks
- Notification requests
You can create tasks from
the following components:
- IBM Cognos Event Studio (notification requests
and approval requests)
- The My Inbox area of IBM Cognos Connection
(notification requests and ad-hoc tasks)
- A watch rule set up for a report (notification requests)
IBM Cognos Metric
Studio & Designer
Use Metric Studio to
create a customized scorecarding environment to monitor and analyze metrics and
projects throughout your organization. Metric Studio helps you translate your
organization’s strategy into relevant, measurable goals that align each
employee's actions with a strategic plan.
A rich scorecarding
environment shows you quickly where your organization is successful and where
it needs improvement. Metric Studio tracks performance against targets and
indicates the current status of the business so that decision makers at every
level of the organization can react and plan. Use the flexibility of Metric
Studio to model metrics and their relationships based on any standard or
proprietary scorecarding and management methodology that you already use.
Metric Designer is the IBM Cognos 8 modeling tool used to create extracts for use in IBM Cognos scorecarding applications. Extracts are used to map and transfer information from existing metadata sources such as Framework Manager models and Impromptu Query Definition (.iqd) files.
If IBM Cognos Metric
Studio is installed and configured as part of your IBM Cognos BI environment,
you can navigate to Metric Studio content in the Content tab and add the
following Metric Studio content to a dashboard:
- Watch lists
- Metric types
- Individual metrics
When you add an individual
metric to the dashboard, historical data for the metric displays in a form of a
bar chart. For any other IBM Cognos Metric Studio content that you add, the
content displays as a list of metrics for the selected item. Each metric in the
list has a hyperlink that opens the individual metric in Metric Studio.
IBM Cognos Framework Manager
IBM Cognos Framework
Manager is the IBM Cognos BI modeling tool for creating and managing business
related metadata for use in IBM Cognos BI analysis and reporting. Metadata is
published for use by reporting tools as a package, providing a single,
integrated business view of any number of heterogeneous data sources.
Framework Manager can rapidly create relational and dimensional models (Dimensionally Modeled Relational) through a guided, workflow-driven modeling process; check the execution path of queries; define filters; and configure multi-language support for data. To enhance the business view of the model, you can use Framework Manager to -
- model for predictable results (star schema)
- model for OLAP-style queries (model dimensionally)
- create one or more business views
- create and apply filters
Transformer is a proven and relatively simple tool for modeling the dimensional hierarchies and levels used in PowerCubes.
Transformer is a data
modeling tool designed for use with IBM Cognos 8 version 8.3 and subsequent
releases. You use this component to create a model, a business presentation of
the information in one or more data sources. After you choose a supported
product locale (language), add dimensional metadata, specify the measures
(performance indicators), and apply custom views, you can create PowerCubes
based on this model. You can deploy these cubes to support OLAP reporting and analysis.
In a map that contains
region layers, you can now create new region layers from existing ones using Map
Manager. Each new region within the new layer is made up of one or more complete
regions from the existing region layer.
As the report author, you
can use this new feature to customize maps when the regions in the supplied maps
do not correspond to the way information is managed and reported on. For
example, your Sales
Territories may not match
the States layer. You can create a Sales Region Layer with a region such as
Northwest (Washington, Oregon,
Idaho, and so
on). As a result, you avoid having to create a new map layer in MapInfo, a task requiring additional knowledge as well as licensing of the original MapInfo product.
It is not, however,
possible to use portions of one region to form a new region. For example, you can
combine the state regions of Washington, Oregon, and Idaho
to form a sales region, but you cannot use portions of any region, such as
cities, to form a new region. Use Map Manager to load an existing IBM Cognos map file, define the new region layer, and then save the map with the new layer.
Install IBM® Cognos® Map Manager if you want to -
- convert maps from non-IBM sources
- assign alternate names for map features
- assign alternate languages for map features
IBM Cognos 10.1 BI Server installation and configuration steps are shown through snapshots for a Windows XP SP3 32-bit environment. However, the steps should not be very different for Windows 2000/2003/2008 on 32-bit or 64-bit.
Step-1) Please unzip the package and run 'issetup.exe' from 'win32' folder.
Step-2) From the screen shown below you can download the installation guide, which is very helpful if you are doing this for the first time.
Step-3) Read the license agreement carefully and accept terms.
Step-4) Select the location where you want to install it.
Step-5) Select the components to be installed. If you want to create the content store in DB2/Oracle/MS SQL Server, then you need to select the last option as shown below. Otherwise select all options (recommended). I am going to create the content store in IBM DB2 9.7 here.
Step-6) With the last screen, the installation is complete. Select the check box to open IBM Cognos Configuration. You can also open it later from Start -> All Programs -> IBM Cognos 10 -> IBM Cognos Configuration.
This is the place where you can see and change the configuration for the C10 BI server. I am going to set up the content store in DB2 9.7. If during installation you selected all 4 components, you need not follow the steps below; go directly to Step-10. If you don't have IBM DB2 installed on your machine, you can download the Express edition of DB2 from http://www.ibm.com/developerworks/downloads/im/udbexp/ and install it. It's free and easy to install.
Step-7) Set the environment variables (the first 4) as shown below.
Step-8) Create the database as instructed below and set the DB name, user ID, and password in Cognos Configuration as shown in the snapshot.
Start -> Run -> type 'db2cmd' and press Enter.
C:\Program Files\IBM\SQLLIB\BIN>db2 create db cm pagesize
DB20000I  The CREATE DATABASE command completed successfully.
C:\Program Files\IBM\SQLLIB\BIN>db2 connect to cm

   Database Connection Information

 Database server        = DB2/NT 9.7.2
 SQL authorization ID   = VMANORIA
 Local database alias   = CM
C:\Program Files\IBM\SQLLIB\BIN>db2 update db configuration using applheapsz 102
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
C:\Program Files\IBM\SQLLIB\BIN>db2 update db configuration using locktimeout 24
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
SQL1363W  One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, all applications must disconnect from this database before the changes become effective.
C:\Program Files\IBM\SQLLIB\BIN>db2stop force
SQL1064N  DB2STOP processing was successful.
C:\Program Files\IBM\SQLLIB\BIN>db2start
SQL1063N  DB2START processing was successful.
CREATE BUFFERPOOL BP32K IMMEDIATE SIZE 250 PAGESIZE 32 K;
CREATE BUFFERPOOL BP4K IMMEDIATE SIZE 250 PAGESIZE 4 K;
CREATE SYSTEM TEMPORARY TABLESPACE TBS32K PAGESIZE 32 K MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 16 OVERHEAD 10.5 PREFETCHSIZE 16 TRANSFERRATE 0.14 BUFFERPOOL BP32K;
CREATE SYSTEM TEMPORARY TABLESPACE TBS4K1 PAGESIZE 4 K MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 16 OVERHEAD 10.5 PREFETCHSIZE 16 TRANSFERRATE 0.14 BUFFERPOOL BP4K;
CREATE SYSTEM TEMPORARY TABLESPACE TBS4K2 PAGESIZE 4 K MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 16 OVERHEAD 10.5 PREFETCHSIZE 16 TRANSFERRATE 0.14 BUFFERPOOL BP4K;
CREATE SCHEMA VMANORIA AUTHORIZATION VMANORIA;
CREATE SCHEMA ADMINISTRATOR AUTHORIZATION ADMINISTRATOR;
GRANT DBADM ON DATABASE TO USER ADMINISTRATOR;
GRANT CREATEIN, DROPIN, ALTERIN ON SCHEMA VMANORIA TO USER ADMINISTRATOR WITH GRANT OPTION;
GRANT USE OF TABLESPACE TBS32K TO USER ADMINISTRATOR WITH GRANT OPTION;
GRANT USE OF TABLESPACE TBS4K1 TO USER ADMINISTRATOR WITH GRANT OPTION;
GRANT USE OF TABLESPACE TBS4K2 TO USER ADMINISTRATOR WITH GRANT OPTION;
GRANT USE OF TABLESPACE TEMPSPACE1 TO USER ADMINISTRATOR WITH GRANT OPTION;
GRANT USE OF TABLESPACE USERSPACE1 TO USER ADMINISTRATOR WITH GRANT OPTION;
GRANT USE OF TABLESPACE SYSCATSPACE TO USER ADMINISTRATOR WITH GRANT OPTION;
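If you prefer to run these statements as a script, the DB2 command line processor can execute a file in one shot; the file name createcm.sql below is just an illustrative placeholder for wherever you saved the statements above:

```shell
REM Run from the DB2 command window (db2cmd); -t uses ; as the
REM statement terminator, -v echoes each statement, -f reads the file
db2 connect to cm
db2 -tvf createcm.sql
db2 connect reset
```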
Step - 9) Copy DB2 drivers from 'C:\Program Files\IBM\SQLLIB\java' folder to 'C:\Program Files\IBM\Cognos\c10\webapps\p2pd\WEB-INF\lib' as shown below.
Step-10) Test the connection in IBM Cognos Configuration before starting the service. Once it's OK, we can start the service.
Start the Cognos service by clicking on the Play button.
You will get a “test phase warning” that the testing of the mail server failed. Since no mail server has been configured, click OK and continue. When the Cognos service has started, click Close.
Step-11) Now we are ready to configure the IIS server to run the Cognos application. If IIS is not installed on your machine, please follow these steps:
- Open Control Panel.
- Double-click Add or Remove Programs.
- Click Add/Remove Windows Components.
- Ensure the Internet Information Services (IIS) check box is selected.
- Highlight Internet Information Services (IIS), and then click Details.
- Ensure all of the check boxes for the subcomponents are selected. If any of the check boxes are grayed out, highlight the subcomponent, click Details, and then select all of the check boxes.
- When you are finished, click Next, and then wait for the configuration to complete. Click Finish, close the Add or Remove Programs dialog box, and then close Control Panel.
Once it's up and running, open Control Panel and double-click Administrative Tools. When the Administrative Tools window comes up, double-click Internet Information Services:
- In Internet Information Services, in the left pane, expand Default Web Site, right-click Default Web Site, point to New, and then click Virtual Directory.
- Click Next. Under Alias, type ibmcognos, and then click Next.
- Browse to C:\Program Files\IBM\cognos\c10\webcontent, click OK, and then click Next.
- Deselect the Run scripts check box so that only Read is selected; click Next.
- Right-click the ibmcognos virtual directory folder, point to New, and then click Virtual Directory. Click Next. Under Alias, type cgi-bin, and then click Next.
- Browse to C:\Program Files\IBM\cognos\c10\cgi-bin, click OK, and then click Next.
Step-12) Once done with the above steps, open IE (I am using IE8) and go to http://localhost/ibmcognos
It will show you the screen below. With this, IBM Cognos 10.1 is installed and configured successfully and is ready for use.
In today’s scenarios, where many ISVs and partners want to provide ‘Analytics as a Service’, they need a Business Intelligence platform that offers a multitenant environment with auditing/logging capabilities yet remains easy to manage. The IBM Cognos platform provides complete auditing capabilities that enable auditing and managing system usage in a multitenant environment. The information logged by IBM Cognos Platform auditing can be used to support administrative requirements such as:
- Planning downtime by identifying quiet periods
- Justifying additional infrastructure requirements
- Tenant-specific usage and activity tracking
- Support for a pay-as-you-use model
- Licensing conformance reporting
- Identifying unused content
Multitenancy provides the capability to support multiple customers or organizations (tenants) using a single deployment of an application, while ensuring that the users belonging to each tenant can access only the data that they are authorized to use. Such applications are called multi-tenant applications. IBM Cognos Business Intelligence (BI) provides capabilities that make it easier to administer and secure multi-tenant applications while minimizing the extra costs associated with these environments.
After multi-tenancy is enabled, you can record tenant activities using an audit logging database. IBM Cognos Business Intelligence provides sample audit reports that show how to use the tenancy information to monitor certain user activities. Since the tenants are now sharing the same application deployment, it is important to consider how to separate the log files for each tenant so that a tenant can only view the logging messages that were generated by its own executions. Separating out the trace files and log files by tenant helps the administrators troubleshoot issues that are specific to a certain account.
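As a toy illustration of that separation (the log line format and the tenant=<id> tag below are assumptions made for the sketch, not the actual cogserver.log layout), splitting a shared log into per-tenant streams is essentially a filter:

```python
# Minimal sketch: split a shared log into per-tenant streams.
# The "tenant=<id>" token is a hypothetical tag; real IBM Cognos audit
# records carry tenancy information in the COGIPF_* audit tables.
from collections import defaultdict

def split_by_tenant(log_lines):
    """Group log lines by the tenant ID embedded in each line."""
    per_tenant = defaultdict(list)
    for line in log_lines:
        for token in line.split():
            if token.startswith("tenant="):
                per_tenant[token[len("tenant="):]].append(line)
                break
    return dict(per_tenant)

lines = [
    "2013-05-01 10:00:01 tenant=acme report RUN ok",
    "2013-05-01 10:00:02 tenant=globex report RUN failed",
    "2013-05-01 10:00:03 tenant=acme logon",
]
streams = split_by_tenant(lines)
print(len(streams["acme"]))   # 2 lines belong to tenant 'acme'
```

Each tenant's administrator would then be given access only to their own stream.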
This article describes the step-by-step procedure for -
1) Setting up the audit reporting environment
2) Working with the sample model and reports for customized auditing
Setting up the audit reporting environment
Note, however, that the sample audit reports that come with IBM Cognos software can also be set up and used without enabling multitenancy. We will set up a Cognos 10.2 BI multitenancy environment here because our focus is making tenant-specific audit and log information available to each tenant. If you are new to this and need help setting up a multi-tenant environment with Cognos 10.2, check out my blog -
The IBM Cognos services send information about errors and events to a local log server. Use the data contained in the default log files primarily for troubleshooting and not for tracking usage. IBM Cognos BI provides the ability to output usage information to a relational database. With the usage audit data stored in a relational data source, reporting then becomes possible.
1) Configure the audit database –
Open ‘IBM Cognos Configuration’ (Start -> All Programs -> IBM Cognos 10, or ‘IBM Cognos 10 – 64’ in the case of a 64-bit installation). In the Explorer pane, expand Environment, right-click Logging, and then click New Resource -> Destination. Type a name (we used AuditDBCon here) and click Database as the type.
Right-click the newly created AuditDBCon database and click New Resource -> Database. In the dialog box, type the database name (COGAUDIT in our case) and using the drop-down menu, click the type of database target (DB2 in our case).
You can choose among DB2, Informix, Oracle, Microsoft SQL Server, and Sybase. The auditing database, like the content store, is populated via a JDBC connection by the Content Manager service, so ensure that the appropriate JDBC drivers are copied into the “<C10_install>\webapps\p2pd\WEB-INF\lib” folder.
In the Explorer pane, click on COGAUDIT and type the necessary parameters, such as database host name and port number, database login credentials, and the database name, into the fields in the ‘Resource Properties’ pane.
Test the audit database connectivity by selecting COGAUDIT and clicking the Test icon from the IBM Cognos Configuration toolbar.
If successful then save the configuration and restart Cognos services from toolbar.
2) During the startup phase, the configuration change is detected, which prompts the application to create the necessary tables within the configured database (the Administrator schema in the GS_DB database in this case). When the service starts, 21 tables are added to the audit database. To avoid name conflicts with database keywords, all table and column names in the database have the prefix "COGIPF". If you don’t find these tables, check cogserver.log for errors.
3) Set Logging Levels –
There are four report validation levels and five logging levels; the snapshot below shows the correspondence between them.
The higher you set the logging level, the more it degrades system performance. Normally, you set the level to Minimal or Basic to collect errors, or to Request to collect errors and warnings.
A second snapshot below indicates the details that each logging level logs.
Setting the audit levels is done through the dispatchers and services task in the administration console in IBM Cognos Connection:
1. From within IBM Cognos Connection, click Launch -> IBM Cognos Administration to launch the IBM Cognos administration console.
2. Click the Configuration tab, and then click Dispatchers and Services.
3. On the Configuration pane of the Dispatchers and Services window, click the Set properties - Configuration icon on the main toolbar.
4. When presented with the Set Properties dialog box, click the Settings tab.
5. Filter the displayed settings to show only settings related to logging by clicking the Category drop-down menu, and then clicking Logging.
6. From the Value menu, set the auditing level for each of the services that make up the IBM Cognos BI environment. If you want to create audit reports that include the queries that are run against your reporting data source, you must enable native query logging. You can use native query logging to learn what kinds of information users want or whether a report is running efficiently.
7. After the levels have been specified for the desired services, click OK to save the new parameter values.
Create data source connections and import audit reports
The database used to record audit information for IBM Cognos BI (GS_DB in our case) can also be used as a reporting data source for system administrators. IBM Cognos BI can be used to create reports to show information from the audit database and provide insight into what is happening on the entire IBM Cognos Platform. IBM provides sample reports to be used for various auditing scenarios. Given that the audit information for IBM Cognos BI is stored in a relational database, administrators can also use SQL queries to get a detailed view of system activities.
1) Create a data source connection to the logging database from Cognos Administrator -> Configuration Tab -> Datasource Connections -> New Data Source. The logging database and data source in IBM Cognos Connection must be named ‘Audit’.
2) If you are using the default application server (Tomcat) that is provided with IBM Cognos BI, then in a text editor, open the web.xml file located at c10_location\webapps\p2pd\WEB-INF, and add the following XML fragment:
Note that the url-pattern value can be anything you choose.
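The fragment itself is not reproduced here; as a rough sketch of its shape (the servlet name and url-pattern below are assumptions chosen to match the connection string used later, not the exact values from the installation guide), it registers the audit servlet along these lines:

```xml
<!-- Hypothetical sketch of the servlet registration for DSServlet -->
<servlet>
  <servlet-name>dsservlet</servlet-name>
  <servlet-class>com.cognos.demo.DSServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>dsservlet</servlet-name>
  <!-- Must agree with the url_xml data source connection string -->
  <url-pattern>/cognos/DSServlet.jsp</url-pattern>
</servlet-mapping>
```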
3) If you are using an application server other than Tomcat, or if Content Manager and Application Tier Components are installed in separate locations, add the XML fragment from the previous step to the following files:
4) If you do not have the following directory on your system, create it:
5) Copy the file build.bat (for the Microsoft Windows operating system) or build.sh (for UNIX operating systems), located in c10_location\webapps\Audit, to c10_location\webapps\p2pd\WEB-INF\classes\com\cognos\demo.
6) Edit the build file to ensure the JAVA_HOME definition points to your JDK and ensure the CRN_HOME definition points to your IBM Cognos location.
7) If it is not already there, copy the DSServlet.java file from the c10_location\webapps\Audit directory to c10_location\webapps\p2pd\WEB-INF\classes\com\cognos\demo.
Do one of the following in the DSServlet.java file:
· If you are allowing anonymous logon, comment out the following line: binding.logon(...)
· If you are not allowing anonymous logon, make sure that the username, password, and namespace are correct and uncomment the following line: binding.logon(...)
At a command prompt, run build.bat or build.sh from c10_location\webapps\p2pd\WEB-INF\classes\com\cognos\demo to compile the Java source file into the class file.
8) Restart IBM Cognos services. If you are using an application server other than Tomcat, rebuild the application file and then redeploy IBM Cognos BI to the application server.
9) Create a data source connection named url_xml to the XML data source. In the Connection string field, enter the connection string. If you used the defaults, the connection string is http://localhost:9300/p2pd/cognos/DSServlet.jsp. Click OK.
10) Before you can use them, you must set up the sample audit reports. The default location is c10_location/webcontent/samples/content/IBM_Cognos_Audit.zip. Copy the file to c10_location/deployment, and then import the sample IBM_Cognos_Audit.zip from Cognos Administrator -> Configuration Tab -> Content Administration -> New Import.
11) In IBM Cognos Connection, click Public Folders > Samples_Audit > Audit, and click the audit report that you want to run. The Multi-tenancy reports folder contains the sample reports for a multi-tenant environment. Depending on the audit report that you select, you are prompted for report criteria.
Working with sample model & reports for customized auditing
The database used to record audit information for IBM Cognos BI can also be used as a reporting data source for system administrators. IBM Cognos BI can be used to create reports to show information from the audit database and provide insight into what is happening on the entire IBM Cognos Platform. IBM provides sample reports to be used for various auditing scenarios. Given that the audit information for IBM Cognos BI is stored in a relational database, administrators can also use SQL queries to get a detailed view of system activities.
1) To design your own auditing model and reports, you need to know the audit tables in detail. Here is a brief description of each table:
COGIPF_ACTION – Stores information about operations performed on objects
COGIPF_AGENTBUILD – Stores information about agent mail delivery
COGIPF_AGENTRUN – Stores information about agent activity, including tasks and delivery
COGIPF_ANNOTATIONSERVICE – Stores audit information about Annotation service operations
COGIPF_EDITQUERY – Stores information about query runs
COGIPF_HUMANTASKSERVICE – Stores audit information about Human Task service operations (tasks and corresponding task states)
COGIPF_HUMANTASKSERVICE_DETAIL – Stores additional details about Human Task service operations (not necessarily required for every audit entry, for example, notification details and human role details)
COGIPF_NATIVEQUERY – Stores information about queries that IBM Cognos software makes to other components
COGIPF_PARAMETER – Stores parameter information logged by a component
COGIPF_RUNJOB – Stores information about job runs
COGIPF_RUNJOBSTEP – Stores information about job step runs
COGIPF_RUNREPORT – Stores information about report runs
COGIPF_THRESHOLD_VIOLATIONS – Stores information about threshold violations for system metrics
COGIPF_USERLOGON – Stores user logon and logoff information
COGIPF_VIEWREPORT – Stores information about report view requests
The COGIPF_SYSPROPS table contains a single record that indicates logging version details. The COGIPF_MIGRATION table is reserved for an upcoming migration application, and the COGIPF_THRESHOLD_VIOLATIONS table records metric threshold exception details that are derived from the IBM Cognos BI system metrics.
Logging in to IBM Cognos Connection causes audit data to be written to audit tables, including:
- COGIPF_USERLOGON (which includes TENANTID as a column)
Details about user sessions, logons, security events, and so on can be obtained by query interactions with the COGIPF_USERLOGON table using COGIPF_SESSIONID. Detailed information about jobs and job steps can be obtained from the COGIPF_RUNJOB and COGIPF_RUNJOBSTEP tables using COGIPF_REQUESTID.
By joining the COGIPF_VIEWREPORT and COGIPF_PARAMETER tables on COGIPF_REQUESTID, additional information can be obtained, such as the package used and the format in which the report was viewed.
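Because the audit store is an ordinary relational database, the join described above can be written in plain SQL. Here is a minimal, self-contained sketch using SQLite stand-ins for the two tables; the table names and the join key COGIPF_REQUESTID come from the text, while the other column names and the sample values are simplified assumptions:

```python
import sqlite3

# Simplified stand-ins for two audit tables; the real tables carry many more
# columns, and the parameter column names here are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE COGIPF_VIEWREPORT (COGIPF_REQUESTID TEXT, COGIPF_REPORTPATH TEXT);
CREATE TABLE COGIPF_PARAMETER  (COGIPF_REQUESTID TEXT, PARAM_NAME TEXT, PARAM_VALUE TEXT);

INSERT INTO COGIPF_VIEWREPORT VALUES ('req-1', 'GO Sales/Sales Summary');
INSERT INTO COGIPF_PARAMETER  VALUES ('req-1', 'outputFormat', 'PDF');
""")

# Join on COGIPF_REQUESTID to see the report path alongside the output format
# in which the report was viewed.
rows = con.execute("""
    SELECT v.COGIPF_REPORTPATH, p.PARAM_VALUE
    FROM COGIPF_VIEWREPORT v
    JOIN COGIPF_PARAMETER p ON p.COGIPF_REQUESTID = v.COGIPF_REQUESTID
    WHERE p.PARAM_NAME = 'outputFormat'
""").fetchall()
print(rows)   # [('GO Sales/Sales Summary', 'PDF')]
```

The same join pattern applies to the session and job queries above: join on COGIPF_SESSIONID or COGIPF_REQUESTID as appropriate.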
2) To work with the audit tables, you can build a model from scratch or start from the provided sample audit model and change it to suit your requirements. The sample model is located at “c10_location/webcontent/samples/models/Audit/Audit.cpf”. Let us start with it.
Open IBM Framework Manager (Start -> All Programs -> IBM Cognos 10) and open the project using the Audit.cpf file. In the Project Viewer pane, notice the query subjects under the ‘Audit’ namespace, the two data sources url_xml and audit, and a package named ‘Audit’. In the Properties pane, I have set properties based on my DB2 audit database (GS_DB). You can also analyze the relationships between the query subjects and test sample scenarios here. All audit reports are based on this model. You may want to change and republish the ‘Audit’ package to suit your reporting requirements.
Keep in mind that additional changes might cause the provided reports in the audit content package to fail when executed.
3) Here we’ll make all audit reports available to all tenants, but each tenant will be able to see only its own data. The query subject COGIPF_USERLOGON has TENANTID as a query item. Open its definition window by double-clicking the query subject. On the ‘Filters’ tab, add a new filter with the expression “[Audit].[COGIPF_USERLOGON].[TENANTID] = #sq($tenantID)#”. The tenant IDs relevant for your multi-tenant environment can be found under “Session Parameters” on the Parameters tab.
You can test the results from the ‘Test’ tab. Save the changes and republish the ‘Audit’ package to Cognos Connection. Now log in as ‘user1’ and open the report ‘Logon operation by tenant’. Notice that only the tenant with ID 1 is available in the list box, and on submission only user1’s logon and logoff activity is shown.
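For illustration, the #sq($tenantID)# macro substitutes the session’s tenant ID into the filter as a quoted string literal at query time. A minimal Python sketch of that expansion (the sq function below is a stand-in for the Cognos macro, not its actual implementation):

```python
def sq(value):
    """Stand-in for the Cognos sq() macro: quote a value as a SQL string literal."""
    return "'" + str(value).replace("'", "''") + "'"

def expand_filter(tenant_id):
    # The filter added to COGIPF_USERLOGON in Framework Manager, after expansion.
    return "[Audit].[COGIPF_USERLOGON].[TENANTID] = " + sq(tenant_id)

print(expand_filter(1))   # [Audit].[COGIPF_USERLOGON].[TENANTID] = '1'
```

Because the macro expands per session, each tenant’s queries are filtered to that tenant’s rows without any change to the reports themselves.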
Similarly, each tenant can see log and audit details only for its own activities and usage.
4) The sample audit reports that we imported in the previous section can also be changed to suit your auditing requirements. If you changed the model in the previous step using Framework Manager, you can add to or update entries in the audit reports. Reports can also be copied, renamed, and customized for tenants by using TENANTID. The default sample reports are described below:
Each report name below is followed by its description:
Agent execution history by user
Lists agent execution history by user and date and time range and includes a bar chart. It also includes the total number of times each agent was executed and the total number of agents that were executed. You can select a date and time range.
Daily average and poor exceptions - all services
Shows how to monitor daily average and poor exceptions of thresholds set in IBM Cognos Administration for all services using an agent.
An email with attached report output is sent to the administrator when average and poor exceptions occur.
Daily metric exceptions
Lists daily metric exceptions for all services.
Execute reports by package and report
Lists the reports that were run, by package. It also includes the user, timestamp, and execution time in milliseconds for each report.
You can select a date and time range, one or more users, one or more packages, and one or more reports.
Execute reports by user
Lists the reports that were run, by user and by package. It also includes the timestamp and execution time in milliseconds for each report.
You can select a date and time range, one or more users, one or more packages, and one or more reports.
Execution history by user
Lists the reports that were run alphabetically, along with the package and timestamp, by user, since the logging database was created. It includes the total number of reports each user ran and the total number of times each user ran each report. It also includes the total number of reports run by all users.
You can select one or more users for the report. After you run the audit report, you can choose to view the statistics for a particular report or for all reports.
Failed report executions - by package
Lists failed report executions by package and includes a pie chart, which shows the percentage of failures for each package.
Failed service requests detect agent - all services
Detects preset thresholds for service request failures that are exceeded.
An email is sent to the administrator with service failure metrics information. The report Service requests metrics - day report is run.
Logon operations by time stamp
Shows logon and logoff timestamps and operations, by user. It also includes the total number of logons and the total number of logons for each user. You can select the time period and one or more users for the report.
Logon operations by user name
Shows logon and logoff timestamp by user, along with the type of logoff operation that occurred.
It includes the total number of logons and the total number of logons for each user. You can select one or more users for the report.
Migration exceptions
A list report shows exceptions for migration tasks.
Operations by selected object and users
Shows the operations that are performed on target objects, by user. It includes the target object path, timestamp, and the status of the operation.
You can select one or more objects, operations, or users for the report.
Report execution history (detailed report)
Lists reports alphabetically along with the associated package and the timestamp for each time the report was executed. It also shows the total number of times each report was executed and the total number of reports that were executed.
It also includes a color-coded pie chart that gives an overview of how often the reports are used.
Report execution and user logon history
This active report displays the report execution history and user logon information for a specified period of time.
Report execution history (summary report)
Lists reports alphabetically along with the timestamp for each time the report was run since the logging database was created.
Report usage
Lists reports by frequency of use. For each report, it lists the user and the number of times it was run by the user since the logging database was created. This report can help you determine whether any reports are not being used; if so, you may want to remove them.
Service requests metrics - day report
Shows percentage of successful and failed requests for IBM Cognos services for the current day. Includes a bar chart.
User session - abnormal termination
Shows the logon date and time of abnormally terminated user sessions. It also includes the total number of session terminations across all dates.
You can select a date and time range.
User session - details
Shows user session details, including the logon time, logoff time, logoff operation, and session duration.
It also includes the total amount of session time for each user and the total amount of session time for all users.
You can select a date and time range and one or more users.
User session - logon errors for past 30 days chart
This audit report shows a bar graph of logon failures for the past 30 days.
User session - summary
This audit report shows the average session duration by user. It also shows the total average session duration by user.
You can select a date and time range and one or more users.
For more information, see the IBM Cognos Business Intelligence Administration and Security Guide.
The following reports can be found under the ‘Multi-tenancy reports’ folder:
Execute reports by tenant
Lists the tenant IDs and tenant users. This report provides package, report, and time stamp information.
Logon operations by tenant
Lists the logon actions for each tenant ID and provides the total number of logons for each user and tenant ID.
Report execution history by tenant
Lists the executed reports, timestamps, and the associated package names for a tenant. This report provides a summary of total activity, and the report can be filtered for a specific tenant.
View reports by package and report
Lists users, reports, timestamps, and packages for the tenant that you select.
Business intelligence is described as “mission critical” by many senior executives today. Organizations are under constant pressure to understand and react quickly to information. In addition, the complexity and volume of data for all aspects of the environments in which organizations operate are increasing. Markets, regulatory environments, customer and supplier relationships, competitive information, and internal operational information all affect how data is viewed and interpreted.
The term “Business Intelligence” was used as early as September 1996, when a Gartner Group report said: “By 2000, Information Democracy will emerge in forward-thinking enterprises, with Business Intelligence information and applications available broadly to employees, consultants, customers, suppliers, and the public.” The key to thriving in a competitive
marketplace is staying ahead of the competition. Making sound business
decisions based on accurate and current information takes more than intuition.
Data analysis, reporting, and query tools can help business users wade through
a sea of data to synthesize valuable information from it - today these tools
collectively fall into a category called "Business Intelligence."
Normally, Business intelligence
(BI) is seen as a broad category of applications and technologies for gathering,
storing, analyzing, and providing access to data to help enterprise users make
better business decisions. Business intelligence applications can be:
- Mission-critical and integral to an enterprise's operations, or occasional to meet a special requirement
- Enterprise-wide, or local to one division or department
- Centrally initiated, or driven by user demand
IBM Cognos Business Intelligence version 10.1 is the revolutionary new business intelligence release from IBM that breaks down the barriers to analytics. It is revolutionary because it expands traditional BI capabilities with planning, scenario modeling, real-time monitoring, and predictive analytics. These capabilities deliver power in an easy-to-use, unified experience that is collaboration- and social-networking-enabled.
IBM Cognos Business Intelligence version 10.1 enables
organizations to gain all the perspectives they need to increase performance by
providing the following functions:
- Analytics that everyone can use to answer key business questions—sharpening
individual skills and improving business outcomes
- Collective intelligence to connect people and insights to gain
alignment—collapsing the time needed to align, decide, and act
- Actionable insight everywhere it is needed, in real time, in mobile apps, and in business processes, enabling instant response to changing business conditions
Built on a proven technology platform, IBM Cognos
Business Intelligence version 10.1 is designed to upgrade seamlessly and
cost-effectively scale for the broadest of deployments. It provides you and
your organization the freedom to see more, to do more, and to make smart
decisions that drive better business results.
These capabilities for authoring, viewing, and modifying reports and queries meet the needs of all users, no matter where or how they work:
- Professional report authors can design, build and
securely distribute multilingual reports to the enterprise.
- Business users can easily create their own reports on the fly or modify existing reports using trusted data.
- IT administrators can deploy, manage and expand the reporting
application from a central console that streamlines administrative tasks.
- The mobile workforce can access and interact with
reports on mobile devices, in Microsoft Office applications, in embedded
and in-process business intelligence and while disconnected.
IBM Cognos BI can meet the needs of all users in your organization with a single, intuitive workspace. With this workspace, everyone can:
- Access all the information—at all angles and
perspectives—they need to drive informed decisions.
- Seamlessly transition to deeper analysis capabilities
and perform complex analysis tasks quickly and easily to get to the “why”
behind an event or action.
- Validate key information and drive business decisions
by incorporating statistical evidence in reports.
- Drill down into and filter real-time data, and incorporate analysis of a broader range of alternative scenarios.
- Deliver the power of predictive analytics into the
hands of business users.
With scorecards, you can
track performance based on key performance indicators (KPIs) to link corporate
strategy to operational tactics. Scorecards enable you to set quantifiable
goals for any time period and monitor progress on specific projects and
activities. You can create strategy maps, impact diagrams, and other elements, and maintain metrics in a centralized data store to ensure consistency.
Scorecards from IBM
business intelligence software help you:
- Align strategy with operations.
- Communicate strategy and track your progress.
- Ensure accountability for performance.
- Share with more user communities.
- Enjoy simple deployment and administration.
With IBM business
intelligence dashboard capabilities, you can:
- Analyze information and share insights to follow a train of thought or generate a unique perspective.
- Assemble and format all kinds of content by dragging and dropping, filtering, modifying and arranging layout, adding colors and text, and personalizing widgets. Change displays, add calculations, prompt, drill up and down, and sort data with more advanced dashboard capabilities.
- Distribute dashboards and integrate
them with IBM® Connections for improved collaboration and alignment.
- Access and interact with dashboards
regardless of language or location with mobile applications.
- Schedule, burst, and distribute professionally authored dashboards to a broad audience of consumers, including those who need disconnected access to their dashboards.
5) Rich BI
in Mobile applications –
Mobile apps for business
intelligence make information available when and where it’s needed. Using IBM
business intelligence mobile apps for Apple, BlackBerry and Android, you can
interact with reports, analysis, dashboards and more on smart phones, tablets
and notebook computers. And, the IBM business intelligence platform can help IT
support these mobile apps and provide the same type of experience to everyone.
With IBM business
intelligence capabilities provided in mobile apps, you can:
- Experience insight wherever you are through quick and
simple access to business intelligence.
- Interact with information like never before in a
rich, visual and interactive experience.
- Confidently and easily deploy relevant and reusable
business intelligence to any device.
IBM Cognos® Active Report
provides an interactive analytics experience in a self-contained BI application
for browsing and exploring data offline. Mobile workers can take their data
with them to discover opportunities and analyze trends even when they are
nowhere near a network.
Users can access the business intelligence they need while offline, for uninterrupted productivity, without having to rely on online connectivity. This brings business intelligence to individuals regardless of their location, situation, or connectivity.
Including statistics with
business reporting is critical for facilitating fact-based decision-making.
This can be a challenge because it often requires using different—and sometimes
disconnected—software. You need to be able to incorporate statistics into core
business reporting without having to struggle with multiple tools, the overhead
of exporting data to different systems or the complexities of bringing results
back together in a single output.
IBM business intelligence
software includes statistics capabilities that simplify the process of
incorporating statistical results with core business reporting, reducing the
time it takes to analyze data and prepare business presentations based on that
analysis. You can validate business information and drive business decisions by
adding statistical evidence to reports that can be delivered easily to broader audiences.
Real-time monitoring includes these features:
- Drill-down capabilities and exception management. Users can quickly determine the root cause of an exception.
- Business-defined alerts. Users can set watch points,
collaborate and alert all parties to sudden issues to address exceptions
based on business rules.
- Robust self-service query,
reporting and analysis. Users can create and author reports, query any data
source and analyze information on their own with a flexible, drag-and-drop
report authoring environment.
- Simplified administration. Administrators have a
central place to set up data sources and analytic models to present
current information, historical data or aggregated views.
- Centralized security. User, group and role-based
access provides access control to events, cubes, views, dimensions, rules,
alerts, portlets and dashboards.
With built-in collaboration and
social networking, you can harness the collective intelligence of your
organization to connect people and insights and gain alignment in your
organization and with key stakeholders. IBM business intelligence
software provides built-in collaboration and social networking capabilities to
fuel the exchange of ideas and knowledge that naturally occurs in decision-making.
Establish decision networks and expand the reach and impact of information
Share insights and solicit ideas with a broad set of social networking
capabilities provided by IBM® Connections. Integrated access to blogs, wikis
and message boards enable you to expand the reach and impact of your
information and gather input from different perspectives.
Increase transparency and accountability to ensure alignment and consensus
Raise the value of your business intelligence by increasing user understanding
of the information it contains. IBM business intelligence software provides a
broad set of report-level features to help you better describe your data,
capture insight about it and prescribe how to use it.
Coordinate tasks to engage the right people at the right time
Put better decisions into action with workflow and task management capabilities
that connect people with insights and coordinate decisions with activities in a
seamless, closed-loop environment. Assign ownership, manage initiatives, and track progress.
10) Planning and budgets
For the best business
outcomes, your company needs to plan and forecast effectively. You must also
get the right information to the right people in the form they need it—and be able
to make changes quickly. For most companies, regardless of size or industry,
the corporate budgeting, forecasting and planning processes present a
formidable challenge. Finance professionals and line managers alike most often
describe annual planning and budgets as burdensome and time-consuming.
IBM enterprise planning
and financial analytics capabilities support a full range of business
requirements. From high-performance, on-demand customer and profitability
analysis and flexible modeling to enterprise contribution, these planning and
budgeting solutions meet the needs of manufacturing, sales and service,
finance, human resources, marketing and more. You can rapidly create, compare
and assess budgets, plans, business scenarios, conditions, drivers, rates and
assumptions and then evaluate what-if scenarios critical to forecasting future performance.
In short, with Cognos Business Intelligence software from IBM, you can:
- Equip users with what they need to explore
information freely, analyze key facts, collaborate to gain alignment with
key stakeholders and make decisions with confidence for better business outcomes.
- Provide quick access to facts with reports, analysis,
dashboards, scorecards, planning and budgets, real-time information,
statistics and the flexibility to manage information for more informed decisions.
- Integrate the results of “what-if” analysis modeling
and predictive analytics into your unified workspace to view possible
future outcomes alongside current and historical data.
- Support users wherever they need to work, with business intelligence capabilities for the office and desktop, on mobile devices, online and offline.
- Meet different analytics needs throughout your
business with solutions that are integrated and right-sized for
individuals, workgroups or midsize businesses, and large organizations or enterprises.
- Implement a highly scalable and extensible solution
that can adapt to the changing needs of IT and the business with flexible
deployment options that include the cloud, mainframes and data warehouse appliances.
- Start addressing your most pressing needs with the
confidence that you can grow your solution over time to meet future
requirements with the integrated Cognos 10 family of products.
You can download the IBM Cognos BI Developer Edition (30-day trial) from the link below:
You are also encouraged to view demo videos to understand core Cognos BI functionality using the link below:
In this blog we’ll develop Cognos BI reports using BigInsights (a Hadoop distribution) along with warehouse data sources. “Data warehouse augmentation” is a big data use case of huge importance to the traditional analytics industry (visit http://www.ibm.com/developerworks/library/ba-augment-data-warehouse1/index.html to learn more). To explore and implement a big data project, you can augment existing data warehouse environments by introducing one or more of the use cases given below, as the business requires.
This blog is directly helpful in case 3; however, Big SQL can be used effectively in all three cases. In my previous blogs we discussed Cognos BI in detail, so if you are new you can check the details here: http://www.ibm.com/software/products/en/business-intelligence. Below is a brief description of BigInsights and Big SQL before we start the integration steps.
IBM InfoSphere BigInsights (http://www.ibm.com/software/data/infosphere/biginsights/) combines Apache Hadoop (including the MapReduce framework and the Hadoop Distributed File System) with unique, enterprise-ready technologies and capabilities from across IBM, including Big SQL, built-in analytics, visualization, BigSheets, and security. InfoSphere BigInsights is a single platform to manage all of your data. InfoSphere BigInsights offers many benefits:
Provides flexible, enterprise-class support for processing large volumes of data by using streams and MapReduce
Enables applications to work with thousands of nodes and petabytes of data in a highly parallel, cost effective manner
Applies advanced analytics to information in its native form to enable ad hoc analysis
Integrates with enterprise software
Big SQL provides SQL access to data that is stored in InfoSphere BigInsights by using JDBC, ODBC, and other connections. Big SQL supports large ad hoc queries by using IBM SQL/PL support, SQL stored procedures, SQL functions, and IBM Data Server drivers. These are low-latency queries that return information quickly, reducing response time and improving access to data. Big SQL offers unmatched simplicity, performance, and security for SQL on Hadoop, providing a single point of access and view across all big data, exactly where it lives.
1) Setting up the environment with BigInsights 3.0, DB2 10.5 Warehouse (BLU) and Cognos BI 10.2.1 FP 3
2) Prepare data sources. Create tables and load data in warehouse and Hadoop environment.
3) Create Cognos data sources, meta-data model and a sample report.
Task 1 - Setting up the environment with BigInsights 3.0, DB2 10.5 (BLU) and Cognos BI 10.2.1 FP 3
In my case, all of the software below is installed on Red Hat Enterprise Linux 6.3. In your environment, the components can be installed on different machines as well.
· For Cognos BI 10.2.1 setup you can either download free developer edition for Windows from IBM website (http://www.ibm.com/developerworks/downloads/im/cognosbi/) or use the installation steps given in my previous blog (http://vmanoria.blogspot.in/2014/08/ibm-cognos-bi-installation.html) if you have the software for Linux.
· If you don’t have licensed version for DB2 10.5 please download and install DB2 10.5 express edition (http://www-01.ibm.com/software/data/db2/express-c/download.html). Installation steps are shown here for Windows https://www.youtube.com/watch?v=2AtSEHC6iAQ
· For BigInsights 3.0 setup you can either download free QuickStart edition images from IBM website (http://www.ibm.com/developerworks/downloads/im/biginsightsquick/) or use the installation steps given in my previous blog (http://vmanoria.blogspot.in/2014/08/infosphere-biginsight-30-installation.html) if you have the software. If you are not using images then you also need to follow below steps.
· Copy the Big SQL JDBC driver to the Cognos library folder and restart the Cognos BI services.
cp /opt/ibm/biginsights/bigsql/bigsql1/jdbc/bigsql-jdbc-driver.jar /opt/ibm/cognos/c10_64/webapps/p2pd/WEB-INF/lib/
Task 2 - Prepare data sources. Create tables and load data in warehouse and Hadoop environment.
To keep things simple, we are going to work with three tables: 1) Student, 2) Student_Details, and 3) Student_Facts. The first two tables are created in the DB2 environment. The third table will be created in the BigInsights HDFS environment using Big SQL. After that, we’ll also create the Student_Details table in the HDFS environment and load its data from the DB2 database using the JDBC driver.
In DB2 BLU, let’s create the Student and Student_Details tables and load data from CSV files. The commands below are run in an RHEL shell.
[root@scekvm1 sample]# su db2inst1
[db2inst1@scekvm1 sample]$ ls
ER.jpg Exam.csv Old Performance.csv QBank.csv Student.csv Student_Details.csv StuFact.csv
[db2inst1@scekvm1 sample]$ db2 connect to gs_db
Database Connection Information
Database server = DB2/LINUXX8664 10.5.3
SQL authorization ID = DB2INST1
Local database alias = GS_DB
[db2inst1@scekvm1 sample]$ db2 -tvf db2ddl.sql
CREATE TABLE DB2INST1.STUDENT ( STUDENT_ID INTEGER NOT NULL, STUDENT_NAME VARCHAR (30) NOT NULL, YEAR_OF_ADMISSION INTEGER NOT NULL, SCHOOL VARCHAR (30) NOT NULL, CLASS VARCHAR (10) NOT NULL, SECTION VARCHAR (3) NOT NULL, HOSTELER VARCHAR (3) NOT NULL )
DB20000I The SQL command completed successfully.
CREATE TABLE DB2INST1.STUDENT_DETAILS ( STUDENT_ID INTEGER NOT NULL, DOB DATE NOT NULL, GENDER VARCHAR (2) NOT NULL, HOME_CITY VARCHAR (15) NOT NULL, HOME_STATE VARCHAR (3) NOT NULL, ADMISSION_CATEGORY VARCHAR (15) NOT NULL, SOCIAL_CATEGORY VARCHAR (15) NOT NULL, SCHOOL_CATEGORY VARCHAR (15) NOT NULL, NATIONALITY VARCHAR (15) NOT NULL, RELIGION VARCHAR (15) NOT NULL )
DB20000I The SQL command completed successfully.
[db2inst1@scekvm1 sample]$ db2 import from Student.csv of del messages msg.txt insert into student
Number of rows read = 1000
Number of rows skipped = 0
Number of rows inserted = 1000
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 1000
[db2inst1@scekvm1 sample]$ db2 import from Student_Details.csv of del messages msg.txt insert into student_details
Number of rows read = 1000
Number of rows skipped = 0
Number of rows inserted = 1000
Number of rows updated = 0
Number of rows rejected = 0
Number of rows committed = 1000
SQL3107W At least one warning message was encountered during LOAD processing.
Now, in BigInsights, let us create the Hadoop tables Student_Details and Student_Facts. After that, we’ll load data into Student_Details from DB2 and into Student_Facts from a CSV file. Before we start, please make sure BigInsights is running; if not, start it by running /opt/ibm/biginsights/bin/start-all.sh
Here we’ll use JSqsh. BigInsights supports a command-line interface for Big SQL through the Java SQL Shell (JSqsh, pronounced “jay-skwish”). JSqsh is an open source project for querying JDBC databases. You may find it handy to become familiar with basic JSqsh capabilities, particularly if you don’t expect to have access to an Eclipse environment at all times for your work. Below commands are being run on RHEL shell.
[root@scekvm1 sample]# su biadmin
[biadmin@scekvm1 sample]$ cd /opt/ibm/biginsights/jsqsh/bin/
[biadmin@scekvm1 bin]$ ls
[biadmin@scekvm1 bin]$ ./jsqsh bigsql
WARN [State: ][Code: 0]: Statement processing was successful.. SQLCODE=0, SQLSTATE= , DRIVER=3.67.33
JSqsh Release 2.1.2, Copyright (C) 2007-2014, Scott C. Gray
Type \help for available help topics. Using JLine.
Just copy and paste the commands below at the JSqsh prompt to create the tables:
CREATE HADOOP TABLE IF NOT EXISTS STUDENT_FACTS (
STUDENT_ID INTEGER NOT NULL,
ATTENDANCE INTEGER NOT NULL,
FEE_COLLECTED INTEGER NOT NULL,
FEE_BALANCE INTEGER NOT NULL,
MARKS INTEGER NOT NULL
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
CREATE HADOOP TABLE IF NOT EXISTS STUDENT_DETAILS (
STUDENT_ID INTEGER NOT NULL,
DOB DATE NOT NULL,
GENDER VARCHAR (2) NOT NULL,
HOME_CITY VARCHAR (15) NOT NULL,
HOME_STATE VARCHAR (3) NOT NULL,
ADMISSION_CATEGORY VARCHAR (15) NOT NULL,
SOCIAL_CATEGORY VARCHAR (15) NOT NULL,
SCHOOL_CATEGORY VARCHAR (15) NOT NULL,
NATIONALITY VARCHAR (15) NOT NULL,
RELIGION VARCHAR (15) NOT NULL
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
You'll get response like this -
0 rows affected (total: 0.49s)
Now let's load the data into table STUDENT_FACTS from the CSV file. Here's the command -
LOAD HADOOP USING FILE URL
WITH SOURCE PROPERTIES ('field.delimiter'=',')
INTO TABLE STUDENT_FACTS OVERWRITE;
On successful completion, you'll get response like this -
WARN [State: ][Code: 5108]: The LOAD HADOOP statement completed. Number of rows loaded into the Hadoop table: "1000". Total number of source records: "1000". If the source is a file, number of lines skipped: "0". Number of source records that were rejected: "0". Job identifier: "job_201409242156_0009".. SQLCODE=5108, SQLSTATE= , DRIVER=3.67.33
0 rows affected (total: 21.26s)
Now let's load data into table STUDENT_DETAILS from the DB2 database table we created earlier.
LOAD HADOOP USING JDBC CONNECTION URL
WITH PARAMETERS (user = 'db2inst1',password='db2inst1')
FROM TABLE STUDENT_DETAILS SPLIT COLUMN STUDENT_ID
INTO TABLE STUDENT_DETAILS APPEND
WITH LOAD PROPERTIES ('num.map.tasks' = 1);
Task-3) Create Cognos data sources, meta-data model and a sample report.
Let’s quickly create two data source connections from the Cognos Administration interface: one to the DB2 database (GS_DB in our case) and another JDBC connection using “IBM Infosphere BigInsights (Big SQL)” as shown below. Provide valid sign-on details and test the connection.
Now let’s open Framework Manager and pull in the tables from the respective sources. Create relationships between them and set the query item properties correctly. In my case, I changed the usage property of ‘Student_ID’ to ‘identifier’; it had defaulted to ‘fact’ because of its integer data type. Before you create and publish the package, test whether the aggregate data comes out correctly. Now you can create a package and publish it to Cognos Connection.
Now you are ready to create your report using Report Studio.
Cognos Business Intelligence 10.2 reporting on InfoSphere BigInsights (Using Hive)
Big data and data warehouse augmentation
Use big data technologies as a landing zone for source data
Use big data technology for an active archive
Use big data technologies for initial data exploration
What's the big deal about Big SQL?
InfoSphere BigInsights is IBM’s big data offering to help organizations discover and analyze business insights hidden in large volumes of diverse data – data that’s often ignored or discarded because it’s too large, impractical, or difficult to process using traditional means. Examples include log records, click streams, social media data, news feeds, emails, electronic sensor output, and even transactional data.
BigInsights brings the power of the open source Apache Hadoop project to the enterprise. In addition, a number of IBM value-add components make up this enterprise analytics platform. These value-adds are in the areas of analysis and discovery, security, enterprise software integration, and administrative and platform enhancements. For more details, please visit the URL below.
You can also download the no-charge Quick Start Edition of IBM InfoSphere BigInsights.
In this blog we’ll see the steps involved in BigInsights installation and configuration on RHEL. There are three major parts to it.
1) Meet the pre-requisites (Hardware & Software)
2) Complete pre-installation activities
3) Install BigInsights 3.0
Meet the pre-requisites (Hardware & Software)
Let’s start with step 1. You can go through the standard supported-environment specification on the IBM site (http://www-01.ibm.com/support/docview.wss?uid=swg27027565). Here I am going to install single-node BigInsights 3.0 on an RHEL 6.4 system with the specification shown in the screenshot below.
We need to verify or install the Expect, Numactl, and Ksh Linux packages. One way to get these libraries is to download them independently from various Linux websites and install them. The other, and probably better, way is to use your OS (RHEL 6.4 in this case) disk or .ISO image. I am going to use the second option here. First I copied the “RHEL6.4-20130130.0-Server-x86_64-DVD1.iso” file into the /data folder (newly created), then mounted it as /media and updated the repository.
mount -oloop RHEL6.4-20130130.0-Server-x86_64-DVD1.iso /media
rpm --import /media/*GPG*
yum clean all
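The post doesn't show how the repository is updated after mounting the ISO. One common approach is to point yum at the mounted DVD with a local repo definition; a minimal sketch (the file name and section name are assumptions):

```
# /etc/yum.repos.d/rhel-dvd.repo (hypothetical file name)
[rhel-dvd]
name=RHEL 6.4 DVD
baseurl=file:///media
enabled=1
gpgcheck=1
```

After saving the file, the yum clean all command above refreshes the cached metadata so the DVD packages become installable.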
The next step is to verify that the Expect, Numactl, and Ksh Linux packages are installed.
rpm -qa | grep expect
rpm -qa | grep numactl
rpm -qa | grep ksh
If the packages are not installed, run the following commands to install them.
yum install expect
yum install numactl
yum install ksh
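The three checks above can be folded into one loop. A small sketch (it only reports status and suggests the install command; nothing here is BigInsights-specific):

```shell
# Check each prerequisite package in one pass and suggest the install
# command for any that are missing. Falls back gracefully when rpm
# itself is unavailable (e.g., on a non-RPM system).
for pkg in expect numactl ksh; do
  if command -v rpm >/dev/null 2>&1 && rpm -q "$pkg" >/dev/null 2>&1; then
    echo "$pkg: installed"
  else
    echo "$pkg: not found - install with: yum install $pkg"
  fi
done
```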
Now we are ready for step-2.
Complete pre-installation activities
In addition to product prerequisites, there are tasks common to all InfoSphere BigInsights installation and upgrade paths. You must complete these common tasks before you start an installation or upgrade.
Task – 1) Ensure that adequate disk space exists for these directories - / (10GB), /tmp (5GB), /opt (15GB), /var (5GB) & /home (5GB).
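A quick way to verify those minimums before installing is to compare df output against each requirement. A hedged sketch (the helper function name is ours, not from the original post):

```shell
# check_space DIR GB: report whether DIR's filesystem has at least GB
# gigabytes available, based on the POSIX (-P) output of df.
check_space() {
  dir=$1; need_gb=$2
  avail_kb=$(df -Pk "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
  if [ -z "$avail_kb" ]; then
    echo "$dir: not mounted here"
  elif [ "$avail_kb" -ge $((need_gb * 1024 * 1024)) ]; then
    echo "$dir: OK (needs ${need_gb}GB)"
  else
    echo "$dir: insufficient space (needs ${need_gb}GB)"
  fi
}
# Requirements from the task above
check_space /     10
check_space /tmp  5
check_space /opt  15
check_space /var  5
check_space /home 5
```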
Task – 2) Check that all devices have a Universally Unique Identifier (UUID) and that the devices are mapped to the mount point.
Before you edit /etc/fstab, save a copy of the original file.
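To see what is actually mapped, you can put the two sources side by side. A small read-only sketch, safe to run before touching /etc/fstab:

```shell
# List device UUIDs as the kernel sees them (blkid may require root),
# then list the device/UUID -> mount point pairs from /etc/fstab.
command -v blkid >/dev/null 2>&1 && blkid || echo "blkid not available"
if [ -f /etc/fstab ]; then
  grep -v '^#' /etc/fstab | awk 'NF {print $1, "->", $2}'
else
  echo "no /etc/fstab found"
fi
```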
Task – 3) Create the biadmin user and group.
// Add the biadmin group.
groupadd -g 123 biadmin
// Add the biadmin user to the biadmin group.
useradd -g biadmin -u 123 biadmin
//Set the password for the biadmin user.
passwd biadmin
//Add the biadmin user to the sudoers file.
sudo visudo -f /etc/sudoers
Find the line below and add ‘#’ to comment it out if it is not already commented:
# Defaults requiretty
Also add these lines just below “# %wheel ALL=(ALL) NOPASSWD: ALL” line
biadmin ALL=(ALL) NOPASSWD:ALL
root ALL=(ALL) NOPASSWD:ALL
Open the /etc/security/limits.d/90-nproc.conf file and add the lines below.
@biadmin soft nofile 65536
@biadmin soft nproc 65536
@root soft nofile 65536
@root soft nproc unlimited
Open the /etc/security/limits.conf file and add the lines below.
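The post doesn't list the limits.conf lines; entries mirroring the 90-nproc.conf values above would look like the following (these values are an assumption carried over from the file above, not taken from the original):

```
@biadmin soft nofile 65536
@biadmin hard nofile 65536
@biadmin soft nproc 65536
@root soft nofile 65536
@root hard nofile 65536
```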
Task – 4) Configure your network.
Edit /etc/hosts to include the IP address, fully qualified domain name, and short name. The format is: IP_address domain_name short_name. For example,
127.0.0.1 localhost.localdomain localhost
172.21.6.151 bda.iicbang.ibm.com bda
Edit /etc/resolv.conf to include the nameservers.
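For example (the search domain matches the hosts entry above; the nameserver address is a placeholder from the documentation range - substitute your own DNS servers):

```
search iicbang.ibm.com
nameserver 192.0.2.53
```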
Save your changes and then restart your network.
service network restart
We need to configure passwordless SSH for the root and biadmin.
ssh-keygen -t rsa (when asked, accept the default file location and leave the passphrase empty)
ssh-copy-id -i ~/.ssh/id_rsa.pub root@bda.iicbang.ibm.com (replace with your own user@hostname)
Ensure that you can log in to the remote server without a password.
Repeat this SSH setting process for biadmin user also.
Run the following commands in succession to disable the firewall.
service iptables save
service iptables stop
chkconfig iptables off
Now disable IPv6 –
echo "install ipv6 /bin/true" >> /etc/modprobe.d/disable-ipv6.conf
Edit the /etc/sysconfig/network file and append the following lines.
Edit /etc/sysconfig/network-scripts/ifcfg-eth0 (assuming eth0 is used for networking) and add these lines –
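The lines to add aren't shown in the post; the settings commonly used on RHEL 6 to disable IPv6 in these two files are the following (assumed values - verify against your environment):

```
# /etc/sysconfig/network
NETWORKING_IPV6=no
IPV6INIT=no

# /etc/sysconfig/network-scripts/ifcfg-eth0
IPV6INIT=no
```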
Append following lines at the end of /etc/sysctl.conf file.
net.ipv6.conf.all.disable_ipv6 = 1
kernel.pid_max = 4194303
net.ipv4.ip_local_port_range = 1024 64000
Restart your machine.
Verify that IPv6 is disabled by checking the interface configuration:
ip addr | grep inet6
IPv6 is disabled if no lines containing inet6 are listed in the output.
Task – 5) Synchronize the clocks of all servers using Network Time Protocol (NTP) source.
Add the line below to /etc/ntp.conf
server 172.21.4.40 iburst
Update the NTPD service with the time servers that you specified.
chkconfig --add ntpd
Start the NTPD service.
service ntpd start
Verify that the clocks are synchronized with a time server.
ntpq -p
Task – 6) Run the pre-installation checker utility to verify your Linux environment's readiness.
I have copied the BigInsights software into the /data folder. Let's unzip it.
tar -xvf IS_BigInsights_EE_30_LNX64.tar.gz
We must run and pass all bi-prechecker.sh tests before starting the BigInsights installation. Before that, let's create a file containing your host name.
echo "bda.iicbang.ibm.com" > hostlist.txt
./bi-prechecker.sh -m ENTERPRISE -f hostlist.txt -u biadmin
If all the checks are [ OK ], then we are ready for the next step. If there are [FAILED] entries, go through the log file created by the utility in the same folder and correct the issues.
Install BigInsights 3.0
Let’s start installation steps which are pretty easy if previous steps are completed successfully.
Navigate to the directory where you extracted the BigInsights package.
Run the start.sh script.
The script starts WebSphere Application Server Community Edition on port 8300. The script provides you with a URL to the installation wizard. In my case I received -
Open it in the browser. On the License Agreement panel, accept the license agreement and then click Next.
On the Installation Type panel, select Cluster installation, select the check box to Create a response file and save your selections without completing an installation, and then click Next.
On the File System panel, enter a name for your cluster (BICluster is the default), select Install Hadoop Distributed File System (HDFS), enter the mount point where you want to install HDFS, and then click Next. You can also choose another file system.
On the 'Secure Shell' panel, select the user (root in my case) that you want to install with, enter any required information, and then click Next.
On the 'Nodes' panel, click your node to use for HDFS. I can see bda.ibm.com listed here.
Next, on the 'Components 1' screen, set passwords of your choice for the ‘catalog’ and ‘bigsql’ users.
Click Next on the remaining panels until you reach the Summary panel. On the Summary panel, click Create response file. The installation program displays the location where your response file is saved. Take note of this location so that you can easily locate your response file after you install HDFS and are ready to install InfoSphere BigInsights.
Make sure you can see all the services running on your node on ‘results’ panel.
Next it’ll take you to BigInsights Console screen. That shows your installation is successfully completed. You can browse information from Welcome tab and decide your next action.
Now if you want to add more nodes in the cluster, prepare them and add from Cluster Status tab.
To stop all the services, run ./stop-all.sh from /opt/ibm/biginsights/bin. Similarly, run ./start-all.sh to start all the services.
We also need to install the “IBM InfoSphere BigInsights Eclipse tools” for developing and deploying applications to the BigInsights server and writing programs using Java MapReduce, JAQL, Pig, Hive and Big SQL. First, download Eclipse 4.3+ from www.eclipse.org. Then add the http://<server>:<port>/updatesite/ URL to your Eclipse Software Updater (Help menu -> Install) as shown below. Select the location and all entries under the IBM InfoSphere BigInsights category. Then simply follow the steps to install the InfoSphere BigInsights plugins.
Planning to install InfoSphere BigInsights 3.0
Preparing to install InfoSphere BigInsights 3.0
Installing InfoSphere BigInsights 3.0
BigInsights 3.0 Tutorials
Those who normally work with Cognos BI on Windows Server find it difficult to install and configure it on Linux. In this blog we’ll see the steps involved in this installation and configuration on RHEL. There are three parts to it.
1) Meet the pre-requisites (Hardware & Software)
2) Install and configure Cognos BI Server components
3) Install and configure HTTP server
Cognos Framework Manager and Transformer are client tools and must be installed on Windows.
Meet the pre-requisites (Hardware & Software)
Let’s start with step 1. You can go through the standard supported-environment specification on the IBM site (http://www-01.ibm.com/support/docview.wss?uid=swg27037784). Here I am going to install Cognos BI V10.2.1 on an RHEL 6.4 system with the specification shown in the screenshot below.
We need to ensure the required packages are installed before we start the Cognos installation:
glibc-2.12-1.80.el6 (both 32-bit and 64-bit packages) - glibc libraries
libstdc++-4.4.6-4.el6 (both 32-bit and 64-bit packages) - libstdc++ libraries
nspr-4.9-1.el6 (both 32-bit and 64-bit packages) - nspr library for the CAM LDAP provider
nss-3.13.3-6.el6 (both 32-bit and 64-bit packages) - nss library for the CAM LDAP provider
openmotif-2.3.3-4.el6 (both 32-bit and 64-bit packages) - openmotif libraries
One way to get these libraries is to download them independently from various Linux websites and install them. The other, and probably better, way is to use your OS (RHEL 6.4 in this case) disk or .ISO image. I am going to use the second option here. First I copied the “RHEL6.4-20130130.0-Server-x86_64-DVD1.iso” file into the /data folder (newly created), then mounted it as /media and updated the repository.
mount -oloop RHEL6.4-20130130.0-Server-x86_64-DVD1.iso /media
rpm --import /media/*GPG*
yum clean all
Now, to check whether the glibc package is already installed, use the command below:
rpm -qa | grep glibc
If the package is installed, you'll get a file list (names ending with .x86_64 or .i686) in return; otherwise we need to install it, including dependencies, using the commands below:
yum install glibc.i686 // For 32-bit
yum install glibc.x86_64 // For 64-bit
Repeat the same process for libstdc++, nspr, nss, openmotif.
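That repetition can be scripted. Here's a dry-run sketch that just prints the install commands so you can review them before running (remove the echo to actually execute each one):

```shell
# Print the yum install commands for both architectures of each
# required library listed above.
for lib in glibc libstdc++ nspr nss openmotif; do
  echo "yum install ${lib}.i686 ${lib}.x86_64"
done
```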
We also need JDK 7 installed as a prerequisite. I am downloading IBM JDK 7 from the IBM site (http://www.ibm.com/developerworks/java/jdk/linux/download.html) for the Linux 64-bit environment and installing it as shown below.
It is installed in /opt/ibm/java-x86_64-71. Now we are ready for step-2.
Install and configure Cognos BI Server components
As shown in the snapshot below, I have copied these 5 server components –
· Cognos BI Server 10.2.1
· Cognos BI Samples (optional)
· Cognos SDK (optional)
· Cognos Mobile (optional)
· Cognos Dynamic Query Analyzer (optional)
First unzip the package using the command below –
tar -xvf bi_svr_10.2.1_l86_ml.tar.gz
Then run the issetup program from the extracted folder; it opens a GUI-based installation wizard, as shown below -
From here the steps are self-explanatory. I am selecting all the components on the ‘Component Selection’ screen, as I want everything on my single server. By default, ‘Cognos Content Database’ is not selected; if you plan to create the content store elsewhere, you can go ahead without it.
Once the installation is over you can go ahead with Cognos Samples, SDK, Mobile and other server components. Add the bcprov-jdk14-145.jar from /opt/ibm/cognos/c10_64/bin64/jre/7.0/lib/ext to the $JAVA_HOME/jre/lib/ext path:
cp /opt/ibm/cognos/c10_64/bin64/jre/7.0/lib/ext/bcprov-jdk14-145.jar /opt/ibm/java-x86_64-71/jre/lib/ext
We also need to set JAVA_HOME in the cogconfig.sh file before opening the Cognos Configuration tool. Add the line below as the first executable command of cogconfig.sh:
export JAVA_HOME=/opt/ibm/java-x86_64-71
Save it and run it to start Cognos service.
Install and configure HTTP server
You can choose your choice of HTTP server here. I am using IBM HTTP Server (32-bit). It is no-charge and can be downloaded from –
Unzip the downloaded package, update JAVA_HOME in the “IHS/install” file, and run it.
tar -xvf ihs.7000.linux.ia32.tar
It’ll open GUI wizard for installation.
After successful installation we’ll add necessary virtual directories in configuration file.
Add the lines below to httpd.conf
# Cognos #
# Load Cognos Apache 2.2 Module
LoadModule cognos_module "/opt/ibm/cognos/c10_64/cgi-bin/lib/mod2_2_cognos.so"
# Add WebDAV lock directory, make sure the LoadModule dav_module modules/mod_dav.so and LoadModule dav_fs_module modules/mod_dav_fs.so are uncommented
# Alias for the cgi scripts
ScriptAlias /ibmcognos/cgi-bin "/opt/ibm/cognos/c10_64/cgi-bin/"
<Directory "/opt/ibm/cognos/c10_64/cgi-bin/">
Allow from all
</Directory>
# Alias for the Cognos webcontent folder
Alias /ibmcognos "/opt/ibm/cognos/c10_64/webcontent"
<Directory "/opt/ibm/cognos/c10_64/webcontent">
Options Indexes FollowSymLinks MultiViews IncludesNoExec
AddOutputFilter Includes html
Allow from all
</Directory>
Find the lines below and add '#' at the beginning to comment them out.
# LoadModule was_ap22_module /opt/IBM/HTTPServer/Plugins/bin/32bits/mod_was_ap22_http.so
# WebSpherePluginConfig /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-cfg.xml
We can save it now. We'll create two files – one to start the server and another to stop it.
Create startIHS.sh with below code –
And stopIHS.sh with –
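The post doesn't show the contents of the two scripts. A minimal sketch, assuming the default IHS install path used elsewhere in this post and IHS's standard apachectl control script (the scripts are written to /tmp here for illustration; copy them to the IHS bin directory as the next step describes):

```shell
# Write the two helper scripts; IHS's own apachectl does the real work.
cat > /tmp/startIHS.sh <<'EOF'
#!/bin/sh
/opt/IBM/HTTPServer/bin/apachectl start
EOF

cat > /tmp/stopIHS.sh <<'EOF'
#!/bin/sh
/opt/IBM/HTTPServer/bin/apachectl stop
EOF

chmod +x /tmp/startIHS.sh /tmp/stopIHS.sh
```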
Copy both files to /opt/IBM/HTTPServer/bin and run startIHS.sh to start the server.
Here’s your Cognos BI server, ready for use. Open “http://localhost:80/ibmcognos” on the local machine, or use the IP address/hostname instead of ‘localhost’. In my case, the HTTP server is running on port 80 (the default).
If you want to upgrade it with Fix Pack 3, which is the latest, please get it from the link below and install it.
IBM Cognos 10.2.1 official documentation (Knowledge Center)
Business Intelligence Installation and Configuration Guide
Business Intelligence Architecture and Deployment Guide
Why Visualization Matters in Analytics World?
The world produces more than 2.5 exabytes of data every day. Visualization is one key approach to gaining insight from this mountain of data. It enables you to see the trends and patterns (along with gaps and outliers) in the data that are not as easily identified in rows and columns of numbers.
Visualization can also provide access to huge data sets, such as weather, web traffic, sales and voting records. Data sets of this size have the potential to be overwhelming and inaccessible; a good visualization provides a way to explore, understand and communicate the data, along with actions the data indicate should be taken. For example, there are visualizations that show past prices for airline tickets and prices for future travel dates. From this sort of visualization, you can see whether ticket prices are trending upwards or downwards and you have guidance when considering tickets for different possible travel dates. A list of ticket prices wouldn’t be nearly as compelling or useful as showing the price variations with graphs.
The most successful visualizations have a few things in common. They have a clear purpose; they include only relevant, focused content; and they present the content in a manner that reveals and highlights the interesting relationships in the data.
Extensible visualization capabilities in IBM Cognos BI
New, extensible visualization capabilities built into IBM Cognos Business Intelligence V10.2.1.1 solutions unleash report authors and business users from a static library of charts. No longer are users locked into only the in-product visualization options. Now you can easily augment your reports—both general and active reports—with a growing collection of visualizations to meet your data and insight requirements.
Through the new Visualization Marketplace on IBM Analytics Zone, report authors can choose from over 30 visualization options, ranging from radar charts to heat maps and area charts. New visualization options will be added to the Marketplace regularly. You’ll always find the right visualization just a simple download away.
Step-by-step process to extend Cognos BI visualization features -
First of all, please make sure you have Cognos BI 10.2.1.1 server installed. If you have 10.2.1, then download and install the 10.2.1.1 fix pack from the IBM support site.
Now download all the visualizations (vis.sample.all_10_2_1_1.zip) along with documentation from the AnalyticsZone site using the link below. To do so you must be a member; if you are not, register yourself – it's easy :).
The next step is to import these visualizations into the Cognos library before we use them through Report Studio.
1) Unzip vis.sample.all_10_2_1_1.zip into a folder. You'll get 30 zip files for 30 visualizations.
2) Open “IBM Cognos Administration” in browser and go to “Library” tab.
3) In the left-side panel you can see the “Visualizations” item. On the right side, click the “Import Visualizations” button as shown in the snapshot below.
4) It’ll open a window where you can import all 30 visualization files at once or one by one.
5) On successful import you can see all the imported files.
6) Now open Report Studio with any package. From the toolbar, drag the “Visualization” tool into the report area. Here you can see all 30 visualizations and select the one of your choice. Align the data items and run the report. You can also use them in your active reports.
IBM Cognos Business Intelligence (BI) is an enterprise-class, web-based, integrated business intelligence suite by IBM. It provides not only traditional BI capabilities such as reporting, analysis, scorecarding, and monitoring of events and metrics, but also expands these capabilities with planning, scenario modeling, real-time monitoring, and predictive analytics. These capabilities deliver an easy-to-use and unified experience that is collaboration and social networking enabled. IBM Cognos BI has a service-oriented architecture designed for scalability, availability, and openness.
IBM Tivoli Directory Server (TDS) is a powerful and authoritative enterprise directory infrastructure that is a critical enabler for enterprise security. It is an important part of the IBM Security Integrated Identity Management portfolio. It plays a key role in building the enterprise identity data infrastructure for applications such as identity management, portals, and web services. It provides a server that stores directory information using a DB2 database, and a proxy server for routing LDAP operations to back-end directory servers. IBM Security Directory Server provides client utilities and graphical user interfaces (GUIs), such as the Instance Administration Tool (idsxinst) and Configuration Tool (idsxcfg), to manage servers.
IBM Tivoli Directory Server provides:
Industry-standard architecture and broad platform support for a range of operating systems and applications and a variety of heterogeneous environments.
Strong scalability and flexibility to support hundreds of millions of entries using IBM DB2 technology and a built-in proxy-server.
Availability to support an identity data infrastructure for global online applications such as consumer-driven web services.
The ability to help you manage identities in the cloud.
Robust auditing and reporting that provides insight through connectivity to IBM QRadar SIEM and greater visibility into the repository with sample reports.
You can use IBM TDS to provide a trusted identity data infrastructure for authentication. As we know, Cognos BI doesn’t provide its own authentication mechanism, but leverages the existing mechanism you use across enterprise applications. In this blog article, our objective is to leverage the existing security features for authentication and data transfer of TDS-based LDAP with IBM Cognos BI in order to secure BI assets and set up a multi-tenant environment.
This blog article describes the step by step procedure for –
1) Setting up TDS 6.2 environment on Windows 7 OS
2) Integrating IBM Cognos BI 10.2.1 Server with TDS 6.2
3) Enabling Multitenancy for Cognos BI environment
Also see –
Setting up TDS 6.2 Environment on Windows 7 OS
2) On the completion of installation, you can see the ‘IBM Tivoli Directory …’ Windows services (Start -> Programs -> Administrative Tools -> Services). The default port used by TDS for the LDAP service is 389.
3) To create and manage directory instances, click “Instance Administration Tool” in the “IBM Tivoli Directory Server 6.2” folder under Start Menu -> All Programs, as shown in the snapshot.
4) Click the “Manage…” button. It’ll open the TDS Configuration Tool. Besides getting info about your setup, you can also perform many tasks listed in the left-side panel, as shown in the snapshot below. Click the “Manage suffixes” task.
5) We need to add “dc=example,dc=com” as a new suffix before importing our example LDIF. After successful addition you will see it in the “Current suffix DNs” list.
6) Below is a glimpse of the sample LDIF; you can download the attachment (http://www.megafileupload.com/en/file/521432/IBM-TDS62-ldif.html) and change it as per your requirements. I’ve created 11 users with user IDs admin and user1 – user10, all with the password “password”. Let’s click “Import LDIF data”.
7) Import the sample LDIF file.
8) On successful restoration, start the server instance from the “Manage Server State” task on the left side, shown in the snapshot below.
Integrating IBM Cognos 10.2.1 BI Server with TDS 6.2
It is assumed that Cognos 10.2 BI server is already installed and is in working condition. Open ‘IBM Cognos Configuration’ from Start -> All Programs -> IBM Cognos 10 – 64.
1) In the Explorer window, under Security, right-click Authentication, and then click New resource -> Namespace.
In the Name box, type a name for your authentication namespace (we used ‘IBM_TDS62’ here), in the Type list select ‘LDAP – Default values for IBM Tivoli’, and click OK.
2) Select the newly created namespace. In the ‘Resource Properties’ window on the right, for the Namespace ID property, specify a unique identifier for the namespace (TivoliLDAP is assigned in the screenshot below). All entries with red arrows are manually provided to integrate with the TDS environment we created in the section above.
3) If you want TDS to bind to the directory server using a specific Bind user DN (Distinguished Name) and password when performing searches, specify these values.
If no values are specified, the LDAP authentication provider binds as anonymous.
If external identity mapping is enabled, the Bind user DN and password are used for all LDAP access. If external identity mapping is not enabled, the Bind user DN and password are used only when a search filter is specified for the User lookup property.
4) You can use user attributes from TDS in the namespace configuration. To configure this, you must map these attributes to the appropriate property name, as shown in the snapshot below. ‘Custom properties’ are available as session parameters through Framework Manager.
5) From the File menu, click Save. Test connectivity to the namespace by right-clicking its name under Security, Authentication and selecting Test. If the test is successful, a confirmation message box will appear.
If you want to disable anonymous access, set the ‘Allow anonymous access?’ property for the ‘Cognos’ namespace as shown below in the snapshot.
6) Restart the Cognos service from the toolbar.
7) Now anyone who wants to access Cognos (http://localhost/ibmcognos) will be asked for authentication credentials. Let us log in with the LDAP administrator credentials.
Directory administrators would have Cognos admin privileges. Go to Cognos administration.
8) In ‘IBM Cognos Administration’, explore ‘Users, Groups, and Roles’ under the ‘Security’ tab. You can see the new namespace (IBM_TDS62). Click it to view all users belonging to the directory.
The administrator can now assign different privileges and roles to these directory users as per application security requirements by setting the relevant properties. Once security permissions are assigned, LDAP users are ready to use Cognos BI. For more information on security, please refer to the “IBM Cognos BI Administration and Security Guide”.
Enable Multitenancy for Cognos BI environment
1) We need to set multitenant properties from IBM Cognos Configuration tool to enable this feature. In IBM Cognos Configuration tool, select Security->Authentication->IBM_TDS62 in Explorer (left pane) window. Now select ‘Advanced Properties’ from right window (Resource properties) and add two new values before pressing OK button -
a) Name – ‘multitenancy.TenantPattern’, value – ‘~/parameters/tenantID’
b) Name – ‘AdditionalUserPropertiesToQuery’, value – ‘parameters’
2) Now, select ‘Custom Properties’ from right window (Resource properties) and add a new value –
Name – ‘tenantID’ value – ‘l’
3) From the File menu, click Save. Test connectivity to the namespace by right clicking on the name under Security, Authentication and selecting test. If the test is successful, this message box will appear.
4) Save the configuration and restart Cognos service. Your Cognos multitenancy feature is enabled.
Many tasks follow this step to realize the benefits of multitenancy in a BI project. Please refer to my previous blog article http://vmanoria.blogspot.in/2014/03/ibm-cognos-bi-setting-up-multi-tenancy.html to see how to manage and administer a multi-tenant environment.
Stream computing delivers real-time analytic processing on constantly changing data in motion. It enables descriptive and predictive analytics to support real-time decisions. Stream computing allows you to capture and analyze all data – all the time, just in time. Relational databases and warehouses find information stored on disk; Streams analyzes data before you store it. The key points are -
1) Streams is the right capability when the primary big data challenge is analyzing data in motion (Velocity) – because the business imperative requires a real-time response or action based on analyzing the data, or because the data is very large and you want to cost-effectively filter and reduce it before moving it into your data warehouse or Hadoop system. It can handle continuous or bursty streams of data – millions of events per second with microsecond latency.
2) Streams can process any type of data (Variety) – audio, video, network logs, sensors, social media such as Twitter, in addition to structured data.
3) And Streams is designed to scale to process any size of data, from terabytes to zettabytes per day.
Stream computing changes where, when and how much data you can analyze. Store less, analyze more, and make better decisions faster. The benefits of streaming analytics are immediately obvious: dramatic cost savings from analyzing data and storing only what is necessary, and the ability to detect events and make real-time decisions – with results ranging from customer retention to fraud detection to cross-selling a product.
IBM InfoSphere Streams for Stream Computing
IBM InfoSphere Streams is an advanced analytic platform that allows user-developed applications to quickly ingest, analyze and correlate information as it arrives from real-time sources. InfoSphere Streams is designed to handle very high data throughput rates, up to millions of events per second. A market leader in providing sophisticated analytics for IoT, IBM received the 2013 Ventana Research award for Operational Intelligence in the IT Innovation category for InfoSphere Streams.
Core highlights are -
Perform advanced real-time analytics on data in motion
Rapidly ingest, correlate and continuously analyze a massive volume and variety of structured and unstructured streaming data as it arrives from thousands of sources
Make real-time predictions and discoveries as data arrives
Visualize data easily with drag-and-drop development tools
Detect and respond to critical events immediately
Learn and update models for future analysis and trend prediction with cognitive computing
InfoSphere Streams helps you:
Analyze data in motion—provides sub-millisecond response times, allowing you to view information and events as they unfold. Tools facilitate sophisticated analytics, such as geospatial, voice, image and text, and also update models on the fly.
Simplify development of streaming applications—uses an Eclipse-based integrated development environment (IDE). Developers are able to easily and rapidly build applications and connect to new data sources. Drag-and-drop editors, wizards, visualization tools, and runtime monitoring and debuggers are available.
Extend the value of existing systems—integrates with your applications, and supports both structured and unstructured data sources. The supporting infrastructure adapts to rapidly changing data formats, types and messaging protocols. It also reads from and writes to a vast number of data sources. A massively parallel architecture is designed to deliver virtually unlimited compute potential.
IBM InfoSphere Streams capabilities are designed to work together and with existing big data and analytics applications such as BI and predictive analytics. Here’s an example scenario:
1) Historic data is stored in a database or warehouse (DB2, InfoSphere Warehouse, Informix, Oracle, solidDB, MySQL, SQL Server, Netezza, etc.), where interesting patterns are detected using database toolkit operators, such as a pattern of credit card transactions that indicates possible fraud. Support for XML allows developers to fuse a broader range of traditional and untraditional data.
2) Analysts use IBM SPSS Modeler to develop and build predictive models, and then deploy them using the SPSS Scoring Operator. The PMML models are imported into InfoSphere Streams Studio to generate Streams programs that score incoming records in real time without suspending InfoSphere Streams applications.
3) Additional data sources such as RFID tags, blogs, or other information might be used to improve the confidence levels of the scoring algorithms.
4) These measures can be sent to dashboards such as IBM Cognos Real-Time Monitoring, or to business process management (BPM) systems, to trigger business processes that take immediate action as required.
5) IBM InfoSphere BigInsights lets you store streaming data in an enterprise-class Hadoop environment for additional analysis or historic retention. InfoSphere Streams and InfoSphere BigInsights use the same advanced text analytics capabilities to simplify natural language processing applications for both data in motion and data at rest. In addition, InfoSphere BigInsights can be used to augment streaming sources with contextual information, and users can visualize InfoSphere Streams data in the InfoSphere BigInsights console.
6) Integrating Streams real-time analytics with ETL solutions like IBM DataStage helps deliver more timely results and offload some of the analytics load from the warehouse. IBM InfoSphere DataStage helps users perform deep analysis and gain additional insight using contextual and source data from other parts of the infrastructure.
7) Messaging queues allow InfoSphere Streams to receive data from or send data to IBM WebSphere MQ, IBM MessageSight and Java Messaging System (JMS) offerings.
8) IBM InfoSphere Data Explorer enables users to visualize InfoSphere Streams data in the InfoSphere Data Explorer CXO dashboard and add streaming data to the InfoSphere Data Explorer index.
Stream computing use cases
When companies can analyze ALL of their available data, rather than a subset, they gain a powerful advantage over their competition. Many customers are seeing tangible ROI using IBM Streams solutions to address their big data challenges:
Healthcare: 20% decrease in patient mortality by analyzing streaming patient data
Telco: 92% decrease in processing time by analyzing networking and call data
Utilities: 99% improved accuracy in placing power generation resources by analyzing 2.8 petabytes of untapped data
Below are a few cross-industry scenarios best suited to stream computing:
1) Know Everything about your Customers
- Social media customer sentiment analysis
- Multi-channel interaction analysis
- Loyalty program analytics
2) Innovate New Products at Speed and Scale
- Social media product/brand sentiment analysis
- RFID tracking & analysis
- Transaction analysis to create insight-based product/service offerings
3) Instant Awareness of Risk and Fraud - Lower risk, detect fraud and monitor cyber security in real time. Augment and enhance cyber security and intelligence analysis platforms with big data technologies to process and analyze new types (e.g. social media, emails, sensors) and sources of under-leveraged data to significantly improve intelligence, security and law enforcement insight.
- Fraud modeling & detection
- Risk modeling & management
4) Exploit Instrumented Assets
- Asset management and predictive issue resolution
- IT log analysis
5) Run Zero Latency Operations
- Smart grid/meter management
- Distribution load forecasting
- Inventory & merchandising optimization
- ICU patient monitoring
- Transportation network optimization
Here are a few industry use cases to give an idea of the breadth of possibilities that stream technology, along with other big data products, can offer.
Data warehouse optimization
Predictive asset optimization
Actionable customer insight
Optimize offers and cross sell
Contact center efficiency and problem resolution
Payment fraud detection and investigation
Counterparty credit risk management
Optimized promotions effectiveness
Micro-market campaign management
Real-time demand forecast
Distribution load forecasting and scheduling
Create targeted customer offerings
Enable customer energy management
Smart meter analytics
Threat prediction and prevention
Social program fraud, waste and errors
Tax compliance - fraud and abuse
Crime prediction and prevention
Measure and act on population health
Engage consumers in their healthcare
Health monitoring and intervention
Knowing the order of events can have profound impacts, for example in predicting the path of a natural disaster or picking the next best stock trade. InfoSphere Streams helps insurance companies plan for natural disasters and enables real-time public alerts. It also performs real-time analysis of sensor data collected from the Hudson River, one of the most instrumented bodies of water in the world. Check this out - https://www.youtube.com/watch?v=y3CZQOtVx6s&list=PLA98824D75176BAEB&index=18
Claims fraud detection
Next best action and customer retention
Catastrophe risk modeling
Advanced condition monitoring
Drilling surveillance & optimization
Production surveillance & optimization
Actionable customer insight
Telecommunications service providers continue to experience a huge growth in smartphone and mobile device use. Growing text and data usage creates a deluge of context- and time-sensitive data. InfoSphere Streams enables telecommunications providers to analyze billions of call data records per day to detect fraud, ensure high asset utilization and create accurate customer profiles for heightened customer service and retention. Using InfoSphere Streams, Sprint reduced storage costs by 90 percent. Check this out - https://www.youtube.com/watch?v=eg8KSLAZ2HM&feature=player_embedded
Pro-active call center
Customer analytics and loyalty marketing
Capacity & pricing optimization
Predictive maintenance optimization
What is Predictive Analytics?
Predictive Analytics is the transformational technology that enables more proactive decision making, driving new forms of competitive advantage by analyzing patterns found in historical and current transaction data, as well as attitudinal survey data, to predict potential future outcomes. This helps organizations become more proactive in cutting costs, reducing risk, increasing profitability, and optimizing their business. The figure below shows how decision making has changed over time.
“Predictive analytics helps connect data to effective action by drawing reliable conclusions about current conditions and future events. It enables organizations to make predictions and then proactively act upon that insight to drive better business outcomes and achieve measurable competitive advantage.” - Gareth Herschel, Research Director, Gartner Group
New approaches are being employed to take advantage of predictive analytics capabilities. Business leaders know that to meet their goals for profitability, revenue, cost reduction, and risk management, especially in the current economy, they cannot continue to operate the way they have in the past. Today’s marketplace involves an exponential increase in the number and source of customer interactions; it is now a high-volume, multi-channel game.
Through better management and use of information, business leaders can remove the blind spots that hinder informed decisions, and also achieve the next generation of efficiencies by providing precise, contextual analytics and insight at the point where these items can make a direct impact on business (point of impact). Doing so can enable micro-optimization, improving insight into patterns of customers, processes, and businesses, and deliver better real-time decisions and actions in every area of the organization.
This micro-optimization is made possible by establishing well-constructed processes and empowering individuals throughout the organization with pervasive, predictive real-time analytics. This approach can help shift from a sense-and-respond focus to a forward-looking predict-and-act focus. This approach also moves analysis from a back-office activity limited to a handful of experts to an approach that can empower everyone in the organization at the point of impact and in the context of the current situation. The result is rapid, informed, and confident decisions and actions throughout the organization, based on consistent and trusted information.
Why predictive analytics?
Eric Siegel articulated this well in “Seven Reasons You Need Predictive Analytics Today”. His seven reasons, summarized:
1. Compete – Secure the Most Powerful and Unique Competitive Stronghold
2. Grow – Increase Sales and Retain Customers Competitively
3. Enforce – Maintain Business Integrity by Managing Fraud
4. Improve – Advance Your Core Business Capacity Competitively
5. Satisfy – Meet Today's Escalating Consumer Expectations
6. Learn – Employ Today's Most Advanced Analytics
7. Act – Render Business Intelligence and Analytics Truly Actionable
The power of predictive analytics in driving optimal outcomes and profitable revenue growth is clearly demonstrated by organizations that deploy predictive solutions. An independent financial impact study by IDC found that the median return on investment (ROI) for the projects that incorporated predictive technologies was 145%, compared with a median ROI of 89% for those projects that did not. Source: IDC Report: Predictive Analytics Yield Significant ROI - SPSS Inc. available at http://www.spss.hu/home_page/idcreport.htm
An independent assessment of SPSS customers found that 94% achieved a positive ROI with an average payback period of 10.7 months. Returns were achieved through reduced costs, increased productivity, increased employee and customer satisfaction, and greater visibility. Flexibility, performance, and price were all key factors in purchase decisions. Source: Nucleus Research Report: The Real ROI of SPSS - SPSS Inc., available at http://www.spss.com/home_page/NucleusResearch.htm
IBM offers strong capabilities in information management, reporting and analysis, and with the addition of SPSS, now can offer users predictive power that leverages both structured and unstructured data. This provides IBM SPSS users a distinct advantage as advanced analytics becomes a mainstream table stake in today’s hyper-competitive marketplace. Source: Nucleus Research Report: IBM and SPSS: Analytics for Everyone, available at http://www.spss.com/media/collateral/programs/IBM-and-SPSS-Nucleus.pdf
Where predictive analytics can help in private and public sectors?
In the private sector, most commercial organizations share similar goals. Typical application areas are as follows:
Attracting the best, most profitable customers through well-targeted campaigns.
Increasing revenues through cross- and up-sell to new and existing customers.
Reducing defection of high-quality customers and, conversely, identifying those who are costly and should be allowed to go through attrition.
Minimizing the effect of fraudulent activity by focusing the work of investigators appropriately.
Increasing customer satisfaction through faster response and processing of legitimate claims.
Building customer loyalty through effective and reliable inventory management.
Reducing operating costs by predicting maintenance needs proactively.
Typical public sector application areas are as follows:
Government agencies manage functions as diverse as tax audit selections, military force recruitment, and proactive policing and public safety.
Healthcare organizations seek to proactively manage their resources and fine-tune their practices to provide better patient care.
Colleges and universities manage the entire student life cycle more efficiently, recruiting the right mix of students, offering students a selection of programs and assistance to keep them enrolled, and managing alumni development programs with greater success.
Predictive analytics helps your organization predict with confidence what will happen next so that you can make smarter decisions and improve business outcomes. IBM offers easy-to-use predictive analytics products and solutions that meet the specific needs of different users and skill levels from beginners to experienced analysts.
With IBM SPSS predictive analytics software, you can:
Transform data into predictive insights to guide front-line decisions and interactions.
Predict what customers want and will do next to increase profitability and retention.
Maximize the productivity of your people, processes and assets.
Detect and prevent threats and fraud before they affect your organization.
Measure the social media impact of your products, services and marketing campaigns.
Perform statistical analysis including regression analysis, cluster analysis and correlation analysis.
The SPSS portfolio is designed to serve the three main phases of the analytical process: capture, predict, and act.
- Ability to capture attributes, interactions, behaviors, and attitudes for customers, employees or constituents
- Data collection capabilities for market research and feedback management
• Predict behavior and preferences
- Top-down statistical analysis, useful for all data types and frequently used for survey data, delivers deeper insight
- Data mining enables predictive modeling
- Text analytics extracts and categorizes concepts from unstructured text, making qualitative data more quantifiable and delivering new insights
• Act on results
- Unique technology and methodology streamlines deployment of analytical results throughout the enterprise to enable better decision making
- Provides reliable automation of analytical processes for better orchestration & discipline
- Enables collaboration to deliver more effective analytical results
Can you please explain the SPSS product portfolio in detail?
Here we’ll discuss the full suite of IBM SPSS Predictive Analytics software:
IBM SPSS Data Collection for capturing attitudes, preferences, and feedback [Capture]
IBM SPSS Statistics Suite for research and analysis [Predict]
IBM SPSS Modeler for predicting future behavior [Predict]
IBM SPSS Decision Management for optimizing operational decisions [Act]
IBM SPSS Collaboration and Deployment Services for enterprise-wide management of analytical assets and results [Act]
IBM SPSS Data Collection is a complete suite of products for survey, market, or business researchers. It enables you to quickly and efficiently acquire clean data from the widest range of sources by using an expansive array of methods, and actively bring data about people’s attitudes and preferences into your analytical decision-making. It is the best way to capture a complete perspective about your important constituents, making research efforts more accurate and more efficient.
This is increasingly important as business today demands faster, more representative and more cost-effective surveys for deeper insight into thoughts and opinions of customers. Both commercial organizations and market research firms rely on data collection’s advanced technologies.
IBM SPSS Modeler is a predictive analytics platform that is designed to bring predictive intelligence to decisions made by individuals, groups, systems and the enterprise. It provides a range of advanced algorithms and techniques, including text analytics, entity analytics, decision management and optimization, to help you select the actions that result in better outcomes. Available in several editions, SPSS Modeler can scale from desktop deployments to integration within operational systems. Key benefits of using IBM SPSS Modeler are as follows:
Access, prepare, and integrate structured data and text, and web and survey data.
Support the entire data mining process with a broad set of tools that are based on Cross Industry Standard Process for Data Mining (CRISP-DM) methodology.
Identify and extract sentiments from text in more than 30 languages and use this insight to build more accurate predictive models.
Deploy textual insights so your entire organization benefits from a comprehensive, 360-degree view of the people you serve.
IBM SPSS Modeler helps analysts build accurate predictive models quickly and intuitively, without programming. Modeling, also known as data mining, helps organizations take seemingly unrelated data and find the hidden relationships within it. Using these models, an organization can look into the future and understand what will happen in any current or future case based on what has happened before. From predicting which offer will have the most impact, to understanding and preventing churn, the modeling family helps people consistently make decisions, maximizing the results. This process repeatability makes modeling a powerful tool for embedding best practices inside the systems and processes of a business. In addition to predicting outcomes, models can explain what factors influence them so users can take advantage of opportunities and mitigate risks. Please visit http://www.ibm.com/developerworks/industry/library/ba-predictive-analytics2 for more details.
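As a rough illustration of the "learn from the past, score the present" idea behind modeling (SPSS Modeler itself builds far richer models, such as decision trees and neural networks, without any coding), here is a toy nearest-neighbour churn sketch with entirely invented data:

```python
def predict_churn_1nn(train, new_customer):
    """Minimal 1-nearest-neighbour sketch of predictive modeling: score
    a new case by finding the most similar historical case. Purely
    illustrative; not how SPSS Modeler works internally.
    Each record: (tenure_months, support_calls, churned?)"""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(train, key=lambda rec: dist(rec, new_customer))
    return nearest[2]  # predicted label from the most similar past case

history = [
    (24, 1, False),  # long tenure, few support calls -> stayed
    (30, 0, False),
    (3, 8, True),    # short tenure, many calls -> churned
    (5, 6, True),
]
print(predict_churn_1nn(history, (4, 7)))   # resembles past churners
print(predict_churn_1nn(history, (28, 1)))  # resembles loyal customers
```

The same trained logic, deployed into an operational system, is what lets an organization act on a prediction at the moment of a customer interaction.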
IBM SPSS Statistics uses sophisticated mathematics to help researchers validate assumptions and test hypotheses. From testing opinions about the latest product feature ideas or the viability of a political candidate, to the efficacy of a new drug treatment or prospective supply-chain allocation, statistics enables an organization to look at the beliefs of an organization and validate whether those views are based in fact. Statistics can give you confidence in the results and final outcomes of decisions you make.
SPSS Statistics provides essential statistical analysis tools for every step of the analytical process. It is used by commercial, government, and academic organizations to solve business and research problems. IBM SPSS Statistics is one of the most accessible statistics tools on the market, enabling organizations to apply mathematical discipline to their decision-making. Key capabilities include:
A comprehensive range of statistical procedures for conducting accurate analysis.
Built-in techniques to prepare data for analysis quickly and easily.
Sophisticated reporting functionality for highly effective chart creation.
Powerful visualization capabilities that clearly show the significance of your findings.
Support for all types of data including very large data sets.
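To make the statistical side concrete, here is a small Python sketch (with invented toy data) of the kind of regression and correlation analysis that SPSS Statistics automates through its GUI:

```python
import statistics

# Hypothetical survey data: ad spend vs. sales. Names and numbers are
# invented purely to illustrate least-squares regression and Pearson r.
ad_spend = [10, 20, 30, 40, 50]
sales    = [25, 45, 65, 85, 105]

mean_x, mean_y = statistics.mean(ad_spend), statistics.mean(sales)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(ad_spend, sales))
var_x = sum((x - mean_x) ** 2 for x in ad_spend)
var_y = sum((y - mean_y) ** 2 for y in sales)

slope = cov / var_x                  # least-squares regression slope
intercept = mean_y - slope * mean_x  # regression intercept
r = cov / (var_x ** 0.5 * var_y ** 0.5)  # Pearson correlation

print(slope, intercept, round(r, 4))  # perfectly linear toy data: r = 1.0
```

In SPSS Statistics the same analysis is a few clicks, with significance tests and charts produced alongside the coefficients.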
The deployment family of SPSS includes the following products:
Collaboration and Deployment Services
The deployment family can help you maximize the impact of analytics in your operation by embedding the results of your analytic efforts in the hearts of enterprise systems. Deployment is about making analytics practical for the people who handle real-world challenges every day. From helping the call center agent by alerting them to the risk of a churning customer, to recommending corrective action for a failing student, the deployment family ensures that the processes of your organization operate at peak efficiency and that objectives are met.
SPSS Decision Management harnesses the power of a variety of technologies that IBM offers, such as data mining, business intelligence, rules, event processing, and data management, and blends them all together into a great mashup. No longer is the business solely reliant on back-office data analysts, data miners, or expert statisticians. It facilitates the creation of web-based business user applications designed for a specific business problem. These applications help business people participate in the use of predictive analytics to meet their challenges.
Decision Management is a highly effective method for optimizing and automating your business decisions. It also helps organizations make the best decisions in real time. Supported by predictive analytics, it offers organizations the ability to move beyond reactive decisions to anticipate which actions are most likely to create successful outcomes in the future. All this technology is wrapped in a graphical user interface (GUI), using language that is familiar and meaningful to the business user.
Decision Management features:
Predictive tools and mathematical techniques to optimize transactional decisions.
Combined and integrated predictive models, rules and decision logic to deliver recommended actions.
“What if…” simulations to accommodate changing conditions based on the volume, variety and velocity of incoming data.
A flexible and intuitive user interface to support the development and implementation of targeted configurations and content.
Seamless integration with IBM Business Analytics and other software solutions.
SPSS Collaboration and Deployment Services (C&DS) lets you manage analytical assets, automate processes, and efficiently share results widely and securely. When the people developing analytics and the people using them can collaborate, analytic efficiency increases.
C&DS capabilities can be described under three headings:
Collaboration refers to the ability to share and reuse analytical assets efficiently, and is the key to developing and implementing analytics across an enterprise.
Analysts place files in the C&DS repository, where they are made available to other analysts or business users with appropriate permissions.
The repository offers a search facility to help users find assets, and a backup and restore mechanism to protect the business from losing these crucial assets.
Logging features provide the ability to track file and system modifications.
Automate, so you can construct flexible analytical processes that can be deployed throughout your operations, ensuring consistent results.
C&DS brings greater consistency to results by giving analysts the power to construct flexible, repeatable analytical processes; these processes can then be operationalized.
C&DS enables management to efficiently govern the analytical environments in which automated processes take place.
Analytical processes can be defined and executed in a job. A job is a container for a set of steps, and each step has parameters associated with it. Before you execute a step, you must embed it within a job. Individual files stored in the repository can be included in processing jobs as job steps. Job steps can be executed sequentially or conditionally. The execution results can be stored in the repository or on a file system. More importantly, the jobs themselves can be triggered according to defined time-based or message-based schedules.
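The job/step model above can be sketched as follows. This is a hypothetical illustration in Python, not the real C&DS API: a job is a container of parameterized steps, executed sequentially or conditionally.

```python
# Hypothetical sketch of a job as a container of steps (invented API).
class Job:
    def __init__(self, name):
        self.name = name
        self.steps = []          # (step_fn, params, condition) tuples

    def add_step(self, fn, params=None, condition=None):
        self.steps.append((fn, params or {}, condition))

    def run(self):
        results = []
        for fn, params, condition in self.steps:
            if condition is None or condition(results):
                results.append(fn(**params))  # result visible to later steps
            else:
                results.append(None)          # step skipped conditionally
        return results

job = Job("nightly_scoring")
job.add_step(lambda source: f"refreshed {source}", {"source": "sales_db"})
# run the scoring step only if the refresh step produced a result
job.add_step(lambda model: f"scored with {model}", {"model": "churn_v2"},
             condition=lambda prior: prior[-1] is not None)
print(job.run())
```

In C&DS the equivalent job would also carry a schedule or message trigger, so it fires without anyone calling it by hand.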
Deploy by embedding analytic results in front-line business processes while integrating with your existing infrastructure with standard programming tools and interfaces.
C&DS supports application server clustering to optimize the performance of the application.
Single sign-on reduces the need to manually provide credentials. Moreover, the system can be configured to be compliant with Federal Information Processing Standard for encryption (AES algorithm).
The scoring service of C&DS allows analytical results from deployed models to be delivered in real time when interacting with a customer. An analytical model configured for scoring can combine data collected from a current customer interaction with historical data to produce a score.
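A minimal sketch of such real-time scoring, with an invented scoring formula and invented field names, might look like this:

```python
# Hypothetical sketch: combine data from the current interaction with
# stored historical data to produce a score, as the text describes.
# The formula and all field names are invented for illustration.
historical = {"cust42": {"avg_order": 120.0, "complaints": 1}}

def score(customer_id, current_interaction):
    hist = historical.get(customer_id, {"avg_order": 0.0, "complaints": 0})
    # toy formula: bigger baskets raise the score; past complaints and a
    # long wait in the current call lower it
    s = (current_interaction["basket_value"] / max(hist["avg_order"], 1.0)
         - 0.2 * hist["complaints"]
         - 0.1 * current_interaction["minutes_on_hold"])
    return round(s, 2)

print(score("cust42", {"basket_value": 180.0, "minutes_on_hold": 3}))
```

In production the model behind the score would be one built in SPSS Modeler and deployed through C&DS, not a hand-written formula.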
The deployment facilities of C&DS are designed to easily integrate with your enterprise infrastructure and other SPSS products, and built with enterprise readiness in mind.
2) Seven Reasons You Need Predictive Analytics Today [http://www.ibm.com/software/products/en/category/predictive-analytics]
3) Predicting the future, Part 1: What is predictive analytics? [http://www.ibm.com/developerworks/industry/library/ba-predictive-analytics1]
4) Predicting the future, Part 2: Predictive modeling techniques [http://www.ibm.com/developerworks/industry/library/ba-predictive-analytics2]
Now let’s take it one step forward. Suppose you have deployed Cognos BI in a cloud environment for multiple customers by enabling multitenancy. The question is: how do you understand BI asset/infrastructure utilization (a) for billing purposes and (b) for provisioning purposes? In this blog post we try to answer that. One way of answering it is to enable auditing. For detailed steps on how to set up auditing in a multitenant IBM Cognos BI environment, please refer to my previous blog post. As you can see there, its purpose is to provide access to the following information in the form of ready-made Cognos reports that can be customized and shared with customers.
So the feature we are going to talk about today (introduced in Cognos BI 10.2.1) is not going to replace the Audit feature, but to complement it by providing object-level details.
Creating and running content store utilization tasks
You can determine how many instances of each object type users from your tenants have in the content store and the amount of space that those instances are taking. You can also determine more detailed information, such as the size of every object.
This information can be used for billing and provisioning purposes. For example, billing decisions can be based on the instance count of particular object types, such as reports. Provisioning decisions can be made by determining which tenants should be moved to a different IBM Cognos instance because of the amount of space that they are using.
You get this information by setting up content store utilization tasks for tenants. Once these tasks are created, you can run them on demand, at a scheduled time, or based on a trigger. The resulting .csv files can be used as data sources to create reports in IBM Cognos BI. Let’s create a utilization task for one of our two tenants, Customer – A.
1) In IBM Cognos Administration, go to the Multitenancy tab and click the “create content utilization” icon in the tenant Actions column.
2) Specify the task name, and optionally a description and screen tip. For the Tenant property, click Set to select the tenant ID that you want to be associated with this task. If you do not select the tenant at this point, the task will be created with the current session tenant ID.
3) Select the tenant or tenants that you want to include in this content utilization task by using the arrows icons to move the tenants from the Available box to the Selected box.
4) In the Options section, specify how to save the information to the log files after this task is run.
a. Under File, if you select ‘One for all tenants’, the information for all tenants is saved in a single file. If you select ‘One per tenant’, the information for each tenant is saved in a separate file.
b. Under Granularity, if you select ‘By object type and tenant’, a high-level summary of information about each tenant is saved. The summary includes an instance count and the total size of each object type in the content store, grouped by tenant. If you select ‘All objects’, a detailed summary of information about each object in the content store is saved. The summary includes the object tenantID, name, storeID, parentStoreID, and size.
5) To run the task now or later, click ‘Save and run once’ or ‘Save and schedule’ respectively. This completes the creation of the task.
6) The new task ‘UtilA’ appears on the Configuration tab, in Content Administration. You can modify or run the task from here.
The log files that result from running the content store utilization tasks are saved in the logs directory that is specified in IBM Cognos Configuration with the following names:
cmUtilization_date_stamp.csv when the One for all tenants option was used.
cmUtilization_date_stamp_tenant_ID.csv when the One per tenant option was used.
Based on the resource consumption and asset utilization data some formula for billing and provisioning can be planned and shared with customers.
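As a sketch, such a utilization log could be aggregated per tenant for billing. The column names used here are assumptions for illustration; check the header of your actual cmUtilization_*.csv before relying on them:

```python
import csv, io
from collections import defaultdict

# Inline sample standing in for a cmUtilization_*.csv produced with the
# 'By object type and tenant' granularity. Header names are assumed.
sample = """tenantID,objectType,instanceCount,totalSize
CustomerA,report,120,5242880
CustomerA,query,40,1048576
CustomerB,report,10,524288
"""

usage = defaultdict(lambda: {"objects": 0, "bytes": 0})
for row in csv.DictReader(io.StringIO(sample)):
    usage[row["tenantID"]]["objects"] += int(row["instanceCount"])
    usage[row["tenantID"]]["bytes"] += int(row["totalSize"])

# Per-tenant totals: a billing formula could charge per object and per MB
for tenant, u in sorted(usage.items()):
    print(tenant, u["objects"], round(u["bytes"] / 1024 / 1024, 2), "MB")
```

The same .csv could equally be registered as a Cognos data source and reported on directly, as mentioned above.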
IBM Cognos Business Intelligence 10.2.1 Administration and Security Guide
Hint: On Windows Cognos server you'll find it here - C:/Program Files/IBM/cognos/c10_64/webcontent/documentation/en/ug_cra.pdf
Multitenancy provides the capability to support multiple customers or organizations, called tenants, by using a single deployment of an application while ensuring that each tenant’s users can access only the data that they are authorized to use. Such applications are called multitenant applications. Multitenant applications minimize the extra costs associated with these environments.
IBM Cognos BI provides built-in multitenancy capabilities. It does not require you to perform additional administration tasks to manage tenants because it reuses your existing authentication infrastructure. That means even when multitenancy is enabled, you continue to manage your users and groups in the same way.
Determine whether you must apply the multitenancy settings to all configured namespaces or to individual namespaces. Multitenancy properties for a specific namespace override any multitenancy properties that are set globally. If a namespace is not configured to use multitenancy, then policies and permissions for objects are used to determine who can access the objects. If multitenancy is applied to multiple namespaces, the tenant IDs in all namespaces must be unique.
In our case we created two tables, “USERS” and “GROUPS”, in a DB2 database to be used by the MyJavaAuthProvider class. I’ve added a few records as shown below.
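The lookup such a provider performs can be sketched in Python against an in-memory SQLite table standing in for the DB2 USERS table. The real provider is a custom Java class, and the column names here are illustrative, not the actual schema:

```python
import sqlite3

# In-memory stand-in for the DB2 USERS table (columns assumed for
# illustration). The real tenant ID mapping runs inside a Java provider.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE USERS (USERNAME TEXT, PASSWORD TEXT, TENANTID TEXT)")
conn.executemany("INSERT INTO USERS VALUES (?, ?, ?)", [
    ("alice", "secret", "CustomerA"),
    ("bob", "secret", "CustomerB"),
])

def tenant_for(username):
    """Return the tenant ID for an authenticated user, or None."""
    row = conn.execute(
        "SELECT TENANTID FROM USERS WHERE USERNAME = ?", (username,)
    ).fetchone()
    return row[0] if row else None  # no tenant ID -> not tenant-scoped

print(tenant_for("alice"))  # CustomerA
```

Whatever the provider returns here is what Cognos stamps on the session, and later on the objects that session creates.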
Let’s follow the steps to enable multitenancy with “MyJavaAuthProvider”:
Open IBM Cognos Configuration.
Choose if you want to configure multitenancy settings globally for all namespaces, or for a specific namespace.
To configure multitenancy for all namespaces, in the Explorer window, for the Security category, click Authentication.
To configure multitenancy for a single namespace (in our case, “MyJavaAuthProvider”), click the namespace that you want to configure.
Under Multitenancy, click the edit button for the “Tenant ID Mapping” property. Specify one of the following properties:
Pattern - To use specific object attributes from your authentication provider, such as a TenantID, you could specify the following value for this property:
“~/parameters/tenant” in our case.
Provider class - To use a custom Java class, you only need to specify the name of the Java class that you created.
In the Explorer window, right-click Authentication, and click Test. If multitenancy is properly configured, your tenant ID is displayed in the details. If multitenancy is not properly configured, the tenant ID is not displayed. If the latter is true, ensure that the multitenancy property values are correct and test again.
I’ve also stopped anonymous access from Cognos Namespace property. From the File menu, click Save.
Restart the IBM Cognos service for the changes to take effect. You can observe the message “Multi-tenancy is enabled” in Details, as shown below.
Once the service starts successfully, you will see the Login screen before entering the Welcome page.
Tenant administration tasks are performed by members of the System Administrators role. System administrators can view and manage all objects in the content store. By default, objects created by a system administrator are tagged with his or her tenant ID. Because users who belong to the System Administrators role have their own tenant IDs, impersonation (Impersonate Tenant) must be used when performing tasks on behalf of a specific tenant. In our case, let’s add the ‘admin’ user and the ‘administrators’ group to the System Administrators role. Here are the steps.
1. Login as admin user and open Security tab from Cognos Administration screen.
2. Click on the Cognos Namespace.
3. The last entry will be “System Administrators”. Open its Properties, go to the Members tab, and add the ‘admin’ user and ‘administrators’ group from ‘MyJavaAuthProvider’. Click OK.
System administrators must create a tenant in Cognos Administration before the tenant users can access the Cognos server. The Multitenancy tab in IBM Cognos Administration is the central area for tenant administration. On this tab, the administrator can view and manage all tenants registered in the current Cognos environment. Let’s register our tenants:
1. In IBM Cognos Administration, click the Multitenancy tab. On the toolbar, click the New Tenant icon.
2. Specify the Name and Tenant ID parameters as shown below. The name can be anything, but the Tenant IDs should match those shown in the tables above if you are using the same data.
Name: Customer – A, Tenant ID: CustomerA
Name: Customer – B, Tenant ID: CustomerB
If you want to update the tenant settings later, from the tenant Actions drop-down menu, click Set properties and change the settings on the General tab. For example, you can change the tenant name.
Assigning tenant IDs to existing content
After multitenancy is enabled and the tenant object is created in Cognos Administration, the system administrator assigns tenant IDs to the existing BI objects. All objects belonging to a tenant have the same tenant ID. Tenant IDs are assigned when a user from a specific tenant logs on to Cognos or when the system administrator impersonates the tenant. Tenant IDs can also be assigned using the software development kit.
In a multitenant environment, all objects in the content store are either public or belong to a single tenant. As a system administrator, you must ensure that the existing objects have a proper tenant ID or are meant to remain public. For example, you can assign tenant IDs to data source connections, but leave the data source itself public.
If the tenant content is not organized into separate folders, you can create a root folder in Cognos Connection for each tenant. This helps to preserve the uniqueness of names in the Cognos BI environment. The Tenant ID is displayed on the General tab in the object properties page. The tenant name associated with each object is shown in the Tenant column in Cognos Connection and Cognos Administration.
Tenant content Deployment
You can export and import the tenant content. You can export:
Content that belongs to the selected tenants and public content
Content that belongs to the selected tenants only
Public content only
Later, you can import the tenant content from the deployment archive into the target environment.
When public content is excluded from the tenant export, and a tenant object has public ancestors, the public ancestors are included in the export so that the content references can be preserved in the target system. For example, in a situation where a data source connection belongs to a tenant, but the data source itself is public, the data source is exported.
In Cognos BI version 10.2.0 there was no option to exclude user account information when public content was deployed. This option exists in the product starting with version 10.2.1.
The export operation can be performed from the Multitenancy tab in IBM Cognos Administration, using the tenant Actions drop-down menu.
Disabling or Deleting tenants
You can disable a tenant when you want to prevent the tenant users from accessing Cognos BI and modifying the tenant content. This should typically be done before deploying a tenant and all of the tenant content. As a best practice, you should disable the tenant before terminating its active user sessions.
You can delete a tenant from the Cognos BI environment. This might be needed if the tenant was permanently moved to a different environment or is no longer required. Before deleting a tenant, you must terminate the tenant's active user sessions. Otherwise, you will not be able to delete the tenant.
When you delete a tenant, you also delete all content associated with the tenant, such as reports, user profiles, or content store utilization tasks.
Both operations can be performed from the Multitenancy tab in IBM Cognos Administration, using the tenant Actions drop-down menu.
IBM Cognos Business Intelligence 10.2.1 Administration and Security Guide
Hint: On Windows Cognos server you'll find it here - C:/Program Files/IBM/cognos/c10_64/webcontent/documentation/en/ug_cra.pdf
An authentication provider implements all the functionality required by Cognos to communicate with an authentication source. It includes:
User authentication using external authentication sources
Trusted credentials management
Authentication provider configuration
The following diagram shows the IBM Cognos security architecture when a full authentication provider is implemented.
You may ask why you would need a Custom Authentication Provider when you have LDAP and other standard solutions. Here are a couple of cases:
1) You want to integrate Cognos BI with your existing application and provide single sign-on, but the existing application stores user/group credentials in a database.
2) You are using your own algorithm, protocol, business rules, or programming logic to authenticate users.
Developing a Custom Authentication Provider
To implement a custom authentication provider, you must have knowledge of the Java programming language. Along with Java you should also be familiar with the following:
1) XPath, the XML search language of the World Wide Web Consortium (W3C)
2) The IBM Cognos Software Development Kit, particularly the BiBusHeader class and user authentication methods.
When you create a trusted sign-on provider, the following tasks are involved:
Creating a trusted single sign-on provider
Configuring the namespace interface
Creating a manifest for the jar file
Registering an authentication listener
Before we start with these tasks step by step, make sure the Cognos BI Server and Cognos SDK are installed. A few Java samples are available in the install_location/sdk/java/AuthenticationProvider directory, which we’ll use here. You will also require the Java Development Kit to compile .java files. For this demo I am using Cognos BI 10.2.1 and IBM DB2 10.5 on a Windows 7 Professional environment.
Now, let’s start with
Task – 1) Creating a trusted single sign-on provider
A trusted sign-on provider is used to determine the user's identity and communicate it to the appropriate authentication namespace configured using an authentication provider supported by Cognos BI. The user's identity must be available to Cognos BI before any secured object can be accessed. With the Trusted sign-on Provider API, you can do the following:
Add or remove an environment variable
Add or remove a trusted environment variable for single sign-on
Set a cookie
Add or remove a credential
Prompt a user for information (user recoverable exception)
Prompt the system for information (system recoverable exception)
Set a different namespace by setting an environment variable
The trusted sign-on provider receives an authentication request from IBM Cognos BI and modifies it, as required, to communicate with the provider authentication namespace. For example, it reads a cookie, decrypts it, parses out the user name, and sets the trusted environment variable, such as REMOTE_USER, to the required user name. Then, it sends the authentication request back to IBM Cognos BI, specifying a full authentication namespace to process the request with the environment variable now set.
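As a rough illustration of the “read a cookie, parse out the user name” step described above, here is a self-contained sketch. The Base64-encoded `user=<name>;ts=<timestamp>` cookie format is an assumption made up for this example; a real provider would run this logic inside the SDK's trusted sign-on callback and then set the trusted environment variable (such as REMOTE_USER) on the request.

```java
import java.util.Base64;

public class TrustedSignonSketch {
    // Hypothetical cookie format for illustration: Base64("user=<name>;ts=<timestamp>").
    // A real trusted sign-on provider would decode/decrypt its actual cookie, extract
    // the user name, and set REMOTE_USER before handing the request back to Cognos.
    public static String userFromCookie(String cookieValue) {
        String decoded = new String(Base64.getDecoder().decode(cookieValue));
        for (String part : decoded.split(";")) {
            String[] kv = part.split("=", 2);
            if (kv.length == 2 && kv[0].trim().equals("user")) {
                return kv[1].trim();
            }
        }
        // No user present: a real provider would raise a recoverable
        // exception here to prompt the user or the system for credentials.
        return null;
    }
}
```

Encryption, signature checking, and error handling are omitted; the sketch only shows where the user identity comes from before the full authentication namespace takes over.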
For a provider to successfully perform the trusted sign-on functionality, you must implement the INamespaceTrustedsignonProvider interface. For more details you can download “Custom Authentication Provider Developer Guide” or find it in C:/Program Files/IBM/cognos/c10_64/webcontent/documentation/en/dg_auth.pdf.
Depending on the Cognos configuration, different session information is available for single sign-on functionality. The session information available from the gateways that can be configured for IBM Cognos BI is described here:
CGI gateway: CGI environment variables, HTTP header variables, REMOTE_USER
ISAPI gateway: HTTP header variables, REMOTE_USER
Apache MOD gateway: HTTP header variables, REMOTE_USER
Servlet gateway, or servlet dispatcher: HTTP header variables, REMOTE_USER, USER_PRINCIPAL (USER_PRINCIPAL is how the gateway or dispatcher exposes the J2EE user principal name to providers)
Here we’ll go ahead with the “JDBCSample” taken from the <CognosInstall>\sdk\java\AuthenticationProvider\ folder. There are two versions of the JDBC provider sample. The classic version (JDBCSample.java) does not support the new session failover capabilities of IBM Cognos that were introduced in 10.2.1.
The restorable version (RestorableJDBCSample.java) can handle restoring a session after failover and demonstrates the changes necessary to implement failover in your own custom provider, so we’ll go with this version.
You may want to have a look at “RestorableJDBCSample.java”. You can also change a few things here and there to customize it for your requirements. Open the “dbInit_db2.sql” file to see the table structure for your authentication provider. You will find two tables, “USERS” and “GROUPS”, and one view. You can change the DDL as you like and run this script against a DB2 database. Alternatively, “dbInit_sqlserver.sql” can be used for MS SQL Server.
I have created 6 users (admin, user1, user2, user3, user4, user5) and two groups (administrators, users) for demo purposes. Next, we need to set properties in the “JDBC_Config_Restorable.properties” file available in the configuration folder and change them as per your settings.
Note: Currently passwords are stored in the table as plain text, but you can apply a hashing or encryption function before storing them to make this more secure.
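As a sketch of that idea, assuming SHA-256 hashing (any scheme of your choice works), you could hash each password before inserting it into the USERS table and compare hashes instead of plain text at login. The class and method names below are made up for illustration; they are not part of the JDBCSample:

```java
import java.security.MessageDigest;
import java.nio.charset.StandardCharsets;

public class PasswordHasher {
    // Hash a plain-text password with SHA-256 and return it as lowercase hex.
    // Store this value in the USERS table instead of the plain-text password,
    // and compare hashes (never plain text) when authenticating a user.
    public static String sha256Hex(String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

For a production setup you would prefer a salted, slow hash (for example PBKDF2 via javax.crypto) over bare SHA-256, but the wiring into the provider's credential check is the same.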
Now, follow the steps mentioned in the readme.txt file; we’ll do the same here.
Change the directory
cd C:\Program Files\IBM\cognos\c10_64\sdk\java\AuthenticationProvider\MultiTenancyTenantProviderSample
Add the Java SDK to your path.
set PATH=C:\Program Files\Java\jdk1.6.0_26\bin;%PATH%
Build the sample using the command build.bat
Now you should see the “CAM_AAA_JDBCSample.jar” file in the directory. Copy this jar file, along with the 32-bit DB2 JDBC drivers, to <Cognos install>/webapps/p2pd/WEB-INF/lib.
Also copy the Configuration\JDBC_Config_Restorable.properties file to the <Cognos install>/configuration folder. We need to rename this file to “JDBC_Config_” + NamespaceID (case sensitive) + ”.properties”. For example, if the Namespace ID is MyJavaAuthProvider, the filename would be “JDBC_Config_MyJavaAuthProvider.properties”.
Task-2) Configuring the namespace interface
Add a “New resource -> Namespace” under Security -> Authentication in Cognos Configuration tool as shown below.
Keep “Custom Java Provider” as the Type; you can choose a Name of your choice. Select OK.
Add the following entries to the properties of your authentication namespace:
Namespace ID = MyJavaAuthProvider (remember, we renamed the properties file using this name)
Java class name = JDBCSample
If you want to stop anonymous access, you can disable it by setting a property in the ‘Cognos’ namespace as shown in the picture below.
Save the configuration and restart the Cognos services. Select your authentication provider and test the connection. If it works, go ahead; otherwise, troubleshoot until it does.
Task – 3) Creating a Manifest for the jar File
When you write a custom authentication provider, you must produce a jar file that contains the following information in its Manifest:
Specification-Title: IBM® Cognos® Custom Provider SDK
Specification-Vendor: IBM Corp.
This part is already covered for us: a MANIFEST file with the above content is part of the JDBCSample folder, but we need to remember it when we create the jar separately.
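If you do build the jar yourself, a minimal MANIFEST.MF along these lines should work (take the authoritative contents from the MANIFEST file shipped in the JDBCSample folder):

```
Manifest-Version: 1.0
Specification-Title: IBM® Cognos® Custom Provider SDK
Specification-Vendor: IBM Corp.
```

You can then package the compiled classes with the JDK's jar tool, for example: jar cfm CAM_AAA_JDBCSample.jar MANIFEST.MF *.class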
Task – 4) Registering an authentication listener
The Custom Java authentication provider API also defines a set of authentication events; these are listed in the Custom Authentication Provider Developer Guide mentioned earlier.
With this, the Java-based Custom Authentication Provider is set up for your IBM Cognos BI environment. Users listed in the DB table can log in, and you can have SSO with other applications using the same provider as well.
Now you can log in.
To set security privileges for users, go to IBM Cognos Administration -> Security tab.
In today’s data-driven landscape, the ability to analyze information to drive decision-making and solve problems is fundamental to success. IBM SPSS Statistics is a commonly used tool among statisticians and analytics professionals for ad hoc analysis and hypothesis testing. Rich in data preparation, analysis, and visualization features, IBM SPSS Statistics provides insight into a sample of data along with tools for prediction and forecasting. Generally speaking, it helps in:
Understanding how to leverage the data collected and being able to fill in the gaps
Understanding the underlying relationships in data from the data warehouse or various repositories and how they impact decisions
Recognizing what influences business outcomes and to what degree by recognizing what variables drive outcomes
Testing assumptions before applying the theory to real-world events, and knowing when to adjust or abandon those assumptions
IBM SPSS Statistics offers the full scope of statistical and analytical capabilities that organizations require. It’s an easy-to-use, comprehensive software solution that:
Addresses the entire analytical process from planning and data preparation to analysis, reporting and deployment
Allows statistical information to be applied to data to improve understanding and decision quality throughout the enterprise
Provides advanced analysis using statistical algorithms (for regression, correlation, segmentation, etc.) and display of those results
Offers enterprise scale and extensibility as part of the SPSS suite of products
Works with all common data types, external programming languages, operating systems, and file types
Works with Cognos BI to bring statistical information into reports, dashboards, and scorecards
In this blog I would like to focus on that last bullet: how IBM SPSS Statistics can work with IBM Cognos BI. If you are new to Cognos BI, please refer to my previous blog “IBM Business Intelligence Software & Its Core Capabilities”. I’ve installed IBM SPSS Statistics 22 and IBM Cognos BI 10.2.1 on a 64-bit Windows 7 Professional environment.
Reading Cognos data
Let’s read Cognos BI data by choosing the menu “File > Read Cognos Data”.
Next, specify the URL for the Cognos server (external dispatcher URI). A description of each field is given below.
Cognos server URL. The URL of the IBM Cognos Business Intelligence server. This is the value of the "external dispatcher URI" environment property in IBM Cognos Configuration on the server. Contact your system administrator for more information.
Mode. Select Set Credentials if you need to log in with a specific namespace, username and password (for example, as an administrator). Select Use Anonymous connection to log in with no user credentials, in which case you do not fill in the other fields.
Namespace ID. The security authentication provider used to log on to the server. The authentication provider is used to define and maintain users, groups, and roles, and to control the authentication process.
User name. Enter the user name with which to log on to the server.
Password. Enter the password associated with the specified user name.
Save as Default. Saves these settings as your default, to avoid having to re-enter them each time.
Now, specify the location of the data package or report as shown below.
And select the data fields or report that you want to read.
Optionally, you can:
Select filters for data packages.
Import aggregated data instead of raw data.
Specify parameter values.
Once you click Run and finish, you can see the data in the Statistics Data Editor. Now you are ready to work with and analyze your data using SPSS statistical features.
Export Cognos Active Report
Once you are done with your analysis and ready with the output, you may want to export it. Export Output saves Viewer output in HTML, text, Word/RTF, Excel, PowerPoint (requires PowerPoint 97 or later), and PDF formats. Charts can also be exported in a number of different graphics formats.
Note: Export to PowerPoint is available only on Windows operating systems and is not available in the Student Version.
To Export Output
Make the Viewer the active window (click anywhere in the window).
From the menus choose: File > Export...
Enter a filename (or prefix for charts) and select “Web Report (*.htm or *.mht)” as export type as shown below.
Now click on “Change Options” button
This Cognos Active Report is now ready, and users can easily share their output with others to view and interact with on smart devices (iPhone, iPad, iPod, Android, and Windows 8).
Cognos Active Reports enable quick decision-making anytime and anywhere. Users can also take advantage of the IBM Cognos Mobile app (available on iPad) by exporting their output as a Cognos Active Report (.mht format).
IBM Cognos Active Report provides an interactive analytics experience in a self-contained Cognos Business Intelligence application for browsing and exploring data offline. Report authors can build reports targeted to users’ needs, keeping the user experience simple and engaging. Mobile workers can take their data with them to discover opportunities and analyze trends even when they are nowhere near a network.
Now, the steps we just followed generate scripts which can be used to automate the entire job. The scripting facility allows you to automate tasks, including:
Opening and saving data files.
Exporting charts as graphic files in a variety of formats.
Customizing output in the Viewer.
IBM SPSS Statistics 22 product documentation is available at:
Join SPSS Community
Cognos Active Report