C^3. C to the power of 3. Cognos. Community. Collaboration.
NickMann 270001QA2W Tags: report+output burst batching cognos event event+studio email 23 Comments 17,580 Views
Bursting is the process of running a report once and dividing the output into subsets for distribution. Each recipient receives only the part of the data that is relevant to them.
For example, you can create a sales report that bursts its output by sales territory, and using Event Studio you can email the appropriate section to each sales manager.
NickMann 270001QA2W Tags: annotation commentary deployment hts business insight 12 Comments 6,950 Views
In Cognos 10, both the Business Insight commentary and the human tasks are stored in relational datastores, configurable from the cogconfig tool.
The content of the two databases can be exported to and imported from file using the htsDeployTool utility that ships with the HTS service. The advantage of deploying this way is that it provides an upgrade path to the next release.
Simply open a shell or cmd prompt in the bin directory (with the server running, for authentication) and type:
htsDeployTool -exportFile MyFile1 -camNamespace myNamespace -camUsername nick_mann -camPassword mySecretPassword1
By default this will export your MyInbox tasks to a zipped, unencrypted XML file called MyFile1.xml.gz, and you should see the stdout line
"new deployment file at <install_dir>\bin\..\deployment\MyFile1.xml.gz" followed by some trace about what was deployed.
To add encryption to the file, simply add -password myDeploymentPassword
To deploy the annotation service content, add -persistenceUnit annotations
To import the tasks into an empty database, simply type:
htsDeployTool -importFile MyFile1 -camNamespace myNamespace -camUsername nick_mann -camPassword mySecretPassword1
This time you will receive much more trace output about what was imported and how clashes with existing objects were avoided.
2. import unencrypted humanTasks
NickMann 270001QA2W Tags: integration service intelligence web event cognos actionable studio 8 Comments 14,413 Views
Hi, I'm Nick Mann from the Event Management Framework team, and I want to talk about something we've been working on recently: a powerful way to integrate Cognos with external applications via web services, using Event Studio.
What is Event Studio?
Event Studio is a tool that lets administrators design an agent to conditionally harvest BI data and act on it. One of the actions, or tasks, that you can perform with the data is calling an external web service.
External web service
First, create a web service, or identify an existing web service that you wish to call. If you want an example and have Eclipse, here is a good tutorial. If you are going to reference an external web service, remember that its domain has to be in the valid domains list in the configuration tool.
Create an Agent in Event Studio.
The condition filter can be as simple as 1=1. Add a web service task and paste in the WSDL URL for the web service, e.g. from the example above, http://localhost:5753/axis_test/services/Converter?wsdl. Internally, Event Studio uses WSIF to parse the WSDL received from the URL and attempts to present the web service arguments to you as a list. At this point you can choose to enter literal data or to use the data gathered by the event. Save the agent, run it, and you're done. See the Event Studio User Guide and samples for more guidance.
If your web service takes complex types, for example if the web service is "wrapped", then the complex type information must be available to WSIF. There is a buildws script in the Cognos bin directory that will process the WSDL URL and build the appropriate support classes. See the Event Studio User Guide for more information.
Event Studio help
**UPDATE** Include sample code
In the previous article, I showed you how to present the Cognos logon window to authenticate against the Cognos BI server and run a simple report via CMS. However, for both aesthetic and technical reasons, it's often necessary to integrate the logon interface directly into your mashup application.
CMS provides the authentication service to accomplish this task. In the example below, a simple HTML form is used to ask the user for their user ID and password before running the report.
In this scenario, we have a single namespace in Cognos 8 BI, so the credential consists of a user name and password. Using this information, we can construct a credential XML document to send to the logon method of the CMS Authentication API.
The credential XML consists of a set of credential elements. The first piece, CAMNamespace, tells CMS to use the “kdap” namespace, which is hard-coded for simplicity in this example. To authenticate with the server, we just need to grab the values the user typed in for user name and password and set CAMUserName and CAMPassword respectively.
Note that this is done as a POST request instead of a GET request, which is important for security reasons: anything on the URL can end up in browser history and proxy or server logs, even if the request is to an HTTPS server. By sending the credential in a POST message body, it can be encrypted and sent securely.
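As a sketch of the pieces described above, the snippet below builds the credential document from the form values and POSTs it. The `<credentials>` wrapper structure and the logon path are my assumptions for illustration; verify the exact schema and resource URL against the CMS Developer Guide.

```javascript
// Sketch of the credential document described above. The wrapper structure
// and element layout are assumptions; check the CMS Developer Guide.
function buildCredentialXML(namespaceID, userName, password) {
    return "<credentials>" +
        "<credentialElements><name>CAMNamespace</name>" +
        "<value><actualValue>" + namespaceID + "</actualValue></value></credentialElements>" +
        "<credentialElements><name>CAMUsername</name>" +
        "<value><actualValue>" + userName + "</actualValue></value></credentialElements>" +
        "<credentialElements><name>CAMPassword</name>" +
        "<value><actualValue>" + password + "</actualValue></value></credentialElements>" +
        "</credentials>";
}

// Send the credential in the body of a POST, never on the URL. The
// "/rds/auth/logon" path here is a placeholder for the real CMS
// authentication resource.
function postLogon(gateway, credentialXML, onDone) {
    var req = new XMLHttpRequest();
    req.open("POST", gateway + "/rds/auth/logon", true);
    req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    req.onreadystatechange = function () {
        if (req.readyState === 4) onDone(req);
    };
    req.send("xmlData=" + encodeURIComponent(credentialXML));
}
```

In a real page, the form's submit handler would read the user name and password fields, call buildCredentialXML, and hand the result to postLogon.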
What happens when there are multiple namespaces? Fortunately, we can extend the sample to cover any possible authentication provider and generate an appropriate UI dynamically. My server now contains two namespaces, one called bluepages and one called LDAP. With the CMS authentication API, I can present a drop-down list to let the user pick the correct namespace.
Once the user has picked the correct namespace, then the sample needs to prompt for the missing pieces of information that a user needs to complete the authentication process.
To accomplish this, we need to generate only the UI that's required. By sending an empty <credential/> logon request to the server, we get a response from the CMS authentication service indicating which additional credential elements are required, at which point we prompt the user to supply the required information. Once a full credential has been built, the CMS authentication service responds with the user's logon display name and account ID. This technique allows us to handle any type of credential format that the underlying authentication provider requires.
Once we've received the response from the server, we need to do some processing on the client end to determine if logon is complete or we need to prompt for more information. We first look to see if an “accountInfo” object has been returned. If this is the case, authentication is complete and we can run the report.
If we didn't receive an account information response, then we need to start prompting the user. We start by looking at each credentialElement returned by the server. The element may contain a label such as “Namespace:” which is used to indicate to the user what they need to provide. The element may also contain an “actualValue”, which is a piece of the credential we need to maintain, but we don't need to ask the user for. For example, if we have already run through the namespace dropdown, it's not necessary to prompt for the namespace again when asking for the user ID and password. If we have an actualValue but no corresponding label, we do not need to present the user with this information in the UI.
Alternatively, a missingValue element indicates what further information is required to complete the credential. In some cases we might receive an enumeration of possible values; others might be of type “textnoecho”, which tells us to create a password type-in box in our UI.
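A minimal sketch of that client-side decision logic, assuming a parsed response object with the accountInfo, credentialElements, label, actualValue, and missingValue pieces described above (the exact property paths are my assumptions, not the CMS wire format):

```javascript
// Decide the next logon step from a parsed authentication response.
// Property names follow the article's description; the real object shape
// should be checked against the CMS Developer Guide.
function nextLogonStep(response) {
    if (response.accountInfo) {
        // Authentication is complete; we can run the report.
        return { done: true, displayName: response.accountInfo.displayName };
    }
    var prompts = [];
    var elements = response.credentialElements || [];
    for (var i = 0; i < elements.length; i++) {
        var el = elements[i];
        // An actualValue is part of the credential we keep, but the user
        // doesn't need to be prompted for it again.
        if (el.actualValue !== undefined) continue;
        if (el.missingValue) {
            prompts.push({
                name: el.name,
                label: el.label,
                // "textnoecho" means render a password-type input box
                password: el.missingValue.type === "textnoecho",
                choices: el.missingValue.enumeration // may be undefined
            });
        }
    }
    return { done: false, prompts: prompts };
}
```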
While generating your own UI requires a bit more code than reusing the Cognos logon dialog, it provides for much greater flexibility and can work in a proxied environment where the user cannot access the Cognos BI server directly.
In the next article, I'll discuss single sign-on and the techniques that can be used to support it in your REST CMS application.
Hello and welcome to this IBM Cognos developer community, C to the Power of 3!
C3 (C to the Power of 3) stands for Cognos, Community and Collaboration and that's exactly why this community is here. It has been created to share ideas and information about developing applications using IBM Cognos 8 Business Intelligence products.
IBM Cognos 8 provides several application programming interfaces that can be used to integrate your Cognos 8 BI content into other applications or tools that are important to your business. Exploring the "art of the possible" is what this developer community is all about.
My name is Wade Williams, and I am the leader of the IBM Cognos Mashup Service development team. CMS is a relatively new feature of IBM Cognos, available since the 8.4.1 release, and adds an important API to access the great content created using IBM Cognos. I will be blogging about CMS in the weeks and months ahead.
I am joined by members of various IBM Cognos teams, across many job roles, and together we have a lot of experience with building applications using IBM Cognos. We're excited about what can be accomplished with our tools, and want to share some ideas and techniques to help you build really great applications that include your BI.
Some of you joined this community when it was first created with the name "IBM Cognos 8 Mashup Service". We decided to expand the scope of this community beyond CMS, so we needed a new name. My team and I will make sure that there will be lots of interesting CMS content available.
There are other forums to talk about all the great things that can be done using the IBM Cognos 8 studios, and Go! Dashboards. We'll stick to what we know, which is the APIs. Those APIs will let us use all that great content created using the studio authoring tools in new and innovative ways.
So please join us in this community and look for regular postings that showcase ideas and techniques for using IBM Cognos 8 right where you need it most.
This is a brief overview of how to debug a custom security provider using Eclipse. The techniques are no different from debugging any remote Java application, and the steps can be found on many sites on the web. Appendix A contains an example of how to do this.
There are a couple of ways to debug any Cognos code. One method is to start Cognos from the command prompt using startup.bat and put print messages in your code; as the code is invoked, you will see the results in the command window. Another is to use an IDE such as Eclipse or IntelliJ to debug the code. There are many advantages to debugging via an IDE, but one of the biggest is that you can step through the code one line at a time, inspecting any variable, rather than knowing ahead of time which variables you want to print. This document describes how to do that.
Here are the steps necessary to debug using Eclipse.
1.) Allow the Cognos server to be run in debug mode.
The first step is to modify the bootstrap_win32.xml file located in the bin directory to allow for debugging. You will see a section (<!-- uncomment these for debug -->) that needs to be uncommented. Notice that the default port in this file is 9091; if you are already using that port, or your network blocks it, modify it to a port that is allowed. Once this change is made, restart Cognos. Note: on Windows you must restart Cognos via the service in order for this to work.
2.) Create an Eclipse project with your custom security provider code.
Make sure it compiles cleanly.
3.) Create a jar file that contains all the class files for debugging.
In order for your code to be debugged, it is convenient to place all the class files in a single jar file, which you can point Eclipse to when debugging.
4.) Set up debugging in Eclipse
Open Eclipse and perform the steps identified in Appendix A, titled "Configuring Eclipse to Debug a Remotely Running Application". For the project, pick the project you created in step 2.
5.) Set breakpoints in the source code from step 2
In my case, I put breakpoints in the logon, logout, and searchObject methods of the class “SingleTenantProvider”, which extends Namespace and implements INamespaceAuthenticationProvider2.
6.) Log on to Cognos with your browser, picking your CSP as the namespace
You should then hit a breakpoint if you put one in the logon or search method.
You might get an error message indicating that the source for a class file cannot be found. If so, configure it to point to the jar file built in step 3. There is more detail in the debug_csp_eclipse.pdf document.
All of the information needed to perform the drill up/down is available in CMS's layoutDataXML (LDX) format. A chart object in LDX consists of a URL to the static image, a table with the chart's underlying data, and a section called regions, which contains the polygon information used to display tooltips and to build an image map that performs actions when a specific area of the chart is clicked.
When we click on a pie slice in our CMS application, we'd like to open up the target report in a pop-up window, and at the same time show the user what they clicked on in the main window.
var reportPart = eval('(' + jsonText + ')');
var chart = reportPart.filterResultSet.filterResult.reportElement.cht;
// The Chart will contain a URL to the chart image, but to make it dynamic we'll also
// construct an Image map to handle drilling down.
var imageHTML = "<IMG border='0' src='" + gateway + chart.url + "' usemap='#chartMap'/>";
imageHTML += "<map name='chartMap'>";
We begin by taking the url property of the chart and creating an HTML image element. However, for the chart to do anything dynamic we also need to create a map. To do this, we need to create an HTML area, that corresponds with the area object of the chart. Chart areas from CMS always correspond to the “poly” type shape in HTML, so we simply append all the x and y coordinates that make up each polygon.
var areas = chart.regions.area;
for (var i=0; i < areas.length; i++)
// If there's no drill through target, don't bother adding it to the map
// Use the "poly" area type, as the shape of the area may be non-rectangular.
imageHTML += "<area shape='poly' coords='";
var coords = areas[i].coord;
for (var j=0; j < coords.length; j++) {
    if (j != 0)
        imageHTML += ",";
    imageHTML += coords[j].x;
    imageHTML += ",";
    imageHTML += coords[j].y;
}
Finally, we need to actually do something when the area is clicked:
var drillDef = areas[i].drills.drill;
drillParam = drillDef.parm.name;
drillValue = drillDef.parm.value;
drillTarget = reportPart.filterResultSet.drillDefinitions.drill.targetPath;
imageHTML += " alt='";
imageHTML += areas[i].label;
imageHTML += "' title='";
imageHTML += areas[i].label;
imageHTML += "'/>";
A drill through is actually running another report while passing parameters from the first report. The drillDefinitions section contains the search path of the target report, while the area contains the parameter(s) that we are answering, and the value(s) that the chart area corresponds to. We now have a complete image map that should respond to our click events, see the full sample here.
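Pulling those pieces together, one way to make the area respond is to emit an onclick attribute that hands the target path and parameter to a handler of our own. runDrillThrough here is a hypothetical function in the host page; it would run the target report via CMS, answering paramName with paramValue, and open the result in a pop-up window as described above.

```javascript
// Build the onclick attribute for a drill-through area. runDrillThrough is
// a hypothetical handler supplied by the host page, not a CMS API.
function drillThroughAttr(targetPath, paramName, paramValue) {
    return " onclick=\"runDrillThrough('" + targetPath + "','" +
        paramName + "','" + paramValue + "')\"";
}

// Inside the loop above, this would be appended just before the closing
// "/>" of the area, e.g.:
// imageHTML += drillThroughAttr(drillTarget, drillParam, drillValue);
```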
With a bit of minor modification, we can also handle drill down events. The code to build up the image map is virtually identical to the above example with a minor exception. Instead of looking at drill definitions, we need to look at drill context:
drillCtx = areas[i].ctx;
if (areas[i].measure && areas[i].member)
drillCtx = areas[i].measure.ctx + ":" + areas[i].member.ctx;
// Finally, need to handle the drill down by defining the link to use our loadReportURL
With the 10.1 version of IBM Cognos, the context value we need is directly under the chart area, but in 8.4.1 we need to put two pieces of information together. Fortunately, we can make this work in both versions of Cognos with the code above. This example hard-codes the direction as DOWN, but it could easily be extended to drill up as well. You can view the complete sample here.
As you can see, making a drillable chart is fairly straightforward, and gives you the flexibility in your application to handle drill events as you see fit. Since you know where and what the user is clicking on, you can use this information to update both the Cognos chart and your own application.
Wade and Mark have shared some ideas in earlier blog entries on using Cognos Mashup Service (CMS) to access report output and mash it up in various ways. Besides the actual report output, CMS makes other resources related to reports available as well, including a set of resources associated with prompting for reports.
A common request we’ve seen from several customers is the desire to share prompt answers across several reports. The idea is that they have an application which displays several Cognos reports, all of which share a common set of prompts. (“Application”, in this case, could mean a C#/Java client, a SharePoint page, a web page, etc.) A common use of this pattern is to establish a context that the application user is interested in (such as a sales region and a date range), and to use that context as prompts/filters on all subsequent reporting. Instead of prompting the user for each report, the application prompts just once, and then uses these prompt answers to set the context for all subsequent report runs. The CMS resources promptPage and promptAnswers support this pattern by allowing the application writer to decouple prompting for a report from running the report(s).
Cognos provides a powerful prompting facility, which includes features such as calendar controls, drop-down selections, cascading prompts, etc. As well, report authors can further design and customize the prompt pages for their reports. CMS provides access to these prompt pages, independent of actually running the reports, through the promptPage resource. As an example, to get the prompt page for a report at Public Folders > Reports > Inventory Report, the REST URL request for this resource is:
This resource returns an XML document with two elements:
Here’s an example of this prompt page XML document:
The application can now open a browser window (or IFRAME) with its source set to this HTML prompt page URL, to ask the user for the prompt responses (without immediately having to run the report(s)).
Once the user has specified all the prompt values and clicked the “Finish” button, the chosen prompt answers are saved on the Cognos server. To retrieve these answers, the application can use the CMS promptAnswers resource (and the promptID from the prompt page XML document):
(Note: the ID after “conversationID/” in this resource is the “promptID” element returned in the prompt page XML document.) The promptAnswers resource is an XML document describing all the prompt answers selected:
The application can then use these prompt answers when running the report, by providing the “xmlData” query parameter, set to the value of this promptAnswers XML, on the request:
http://localhost/cognos8/cgi-bin/cognos.cgi/rds/reportData/path/Public%20Folders/Reports/Inventory%20Report?xmlData=<rds:promptAnswers xmlns:rds="http://developer.cognos.com/schemas/rds/types/2"> <rds:promptValues><rds:name>region . . .
Besides the Inventory Report, for which these prompts were originally collected, the same prompt answers can be used to run any other report with the same prompts (using the xmlData query parameter). For example, for the Sales Report:
http://localhost/cognos8/cgi-bin/cognos.cgi/rds/reportData/path/Public%20Folders/Reports/Sales%20Report?xmlData=<rds:promptAnswers xmlns:rds="http://developer.cognos.com/schemas/rds/types/2"> <rds:promptValues><rds:name>region . . .
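As a sketch, these requests can be assembled in JavaScript as below. The gateway value and resource paths follow the URL patterns shown above; verify the exact forms in the CMS Developer Guide.

```javascript
// Build the promptAnswers request from the promptID returned in the prompt
// page document. Path shape follows the description above.
function buildPromptAnswersURL(gateway, promptID) {
    return gateway + "/rds/promptAnswers/conversationID/" + encodeURIComponent(promptID);
}

// Build a reportData request that answers prompts via the xmlData query
// parameter. Each path segment and the promptAnswers document itself must
// be URL-encoded.
function buildReportDataURL(gateway, reportPath, promptAnswersXML) {
    var encodedPath = reportPath.split("/").map(encodeURIComponent).join("/");
    return gateway + "/rds/reportData/path/" + encodedPath +
        "?xmlData=" + encodeURIComponent(promptAnswersXML);
}
```

For example, buildReportDataURL("http://localhost/cognos8/cgi-bin/cognos.cgi", "Public Folders/Reports/Sales Report", answersXML) reproduces the Sales Report request shown above.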
The decoupled set of prompt-related resources available from CMS allows the flexibility to fit many different application requirements related to prompting. Other variants of this idea include:
The CMS samples included in the Cognos SDK installation contain examples of working with the promptPage and promptAnswers resources.
The syntax for the CMS REST call is as follows:
Here is an example of the orgchart and treemap using data from CMS.
The treemapcms.html file in the “create” directory contains a treemap and an orgchart using data from Cognos 8. It leverages the Google Visualization API with extensions.
This example includes an XML file that can be used as a datasource; it is located in the data directory. The content folder contains a very simple Framework Manager model, along with the reports used in this example: one report uses the included FM model and another uses the GO Sales FM model. The files in the web directory can be placed anywhere, but it is usually best practice to create a virtual directory. This will also work with the GO Sales data.
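By way of illustration, the data pulled from CMS has to be shaped into the node/parent/size rows that google.visualization.arrayToDataTable expects for a TreeMap. The name/parent/value field names on the input rows below are my assumptions about the extracted data, not CMS output names.

```javascript
// Hypothetical sketch: convert rows extracted from a CMS response into the
// [node, parent, size] array form that google.visualization.arrayToDataTable
// expects for a TreeMap.
function toTreemapRows(rows) {
    var out = [["Node", "Parent", "Size"]];
    for (var i = 0; i < rows.length; i++) {
        out.push([rows[i].name, rows[i].parent, Number(rows[i].value)]);
    }
    return out;
}
```

The resulting array would then be passed to arrayToDataTable and drawn with the TreeMap visualization, as in the treemapcms.html sample.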
I would like to discuss a simple example that uses IBM Cognos Mashup Service (CMS) to create a Mashup of my IBM Cognos reports and the ubiquitous web application used in Mashups, Google Maps. This Mashup demonstrates how easy it is to integrate IBM Cognos content into any web application. Mashing up IBM Cognos business intelligence (BI) with other applications can provide a more useful user experience, putting the BI within the context and user interface of another application, right at the “point of impact”, and avoiding the need to switch to another application to make fully informed decisions.
Watch the video of the Mashup in action here.
There are a few points worth noting in this demonstration. First, the Mashup uses CMS to access the “Branches” report resource in XML form, so that the data values are readily available to pass to the Google Maps APIs for geocoding. This is robust, and avoids fragile workaround techniques like HTML scraping. The XML format is called layoutDataXML (LDX), and is fully documented in the CMS Developer Guide. Second, the “Branch Details” report resource is requested in HTML fragment form, so CMS transforms the LDX into an HTML fragment, suitable for insertion into a <div> element of an HTML page, before returning the resource. Finally, the Mashup uses another resource type, the cognosURL resource, to leave the Mashup application and launch another web browser window to open the same “Branch Details” report in the full Cognos Viewer user interface.
I like how easy it is to make requests for BI resources to include in the Mashup application. Resources are also available in JSON format from CMS.
In this video, I explain the construction of the REST URL references to IBM Cognos report objects.
I hope that this example helps you to think of ways that your applications can integrate with IBM Cognos to deliver business intelligence right where it is needed most. I look forward to seeing other applications that apply IBM Cognos Mashup Service.
If you would like to see the complete code for this example, you can download the html file with this code here. This sample was created using IBM Cognos 8.4.1. The reports are written using Report Studio with the “Go Data Warehouse (analysis)” sample package.
For information on how to use the Google maps API please visit http://code.google.com/intl/en/apis/maps/
NickMann 270001QA2W Tags: annotations commentry annotaion reporting 2 Comments 3,599 Views
Reporting from annotation datastore in Cognos 10
I have provided a zip file with a Framework Manager model to report against the annotation datastore. I've also provided a deployment with a package and a sample report.
Annotation reporting resources
(see readme below)
Here is a screenshot of the resulting sample report. As you can see, there is an issue with the owner (who left the comment) and the parentId (report store ID): I have been unable to resolve them to their readable values within the model itself.
For those interested, I have raised an enhancement request to enable the annotation service to store the default names of the owner and parent report. Any further enhancements to this report that enable searching by report or user I will post to this blog.
1. annotaion_model is the Framework Manager model. The datasource is called ANNOTATION_DB and should point to the location of your annotation content store.
See this doc note for the location: http://www-01.ibm.com/support/docview.wss?uid=swg21456985
2. Sample package and report. Move the file annotation_package_and_sample_report.zip to the deployment folder.
Run the import from Cognos Administration, update the datasource to point to the annotation content store (see above), and add a signon.
3. Optionally, deploy some annotations to the annotation store. The alternative is to create and annotate a report of your own.
Move the file annotation_db_content.xml.gz to the deployment folder.
Open a command shell in <install>\bin.
Execute the following command: htsDeployTool -importFile annotation_db_content -persistenceUnit annotations
Add the relevant CAM arguments from the reference below if you need to log on.
Full reference here.
When it comes to writing any application, performance is always a consideration. There are a lot of factors to consider:
There is no one-size-fits-all technique, as different scenarios require different approaches. For example, a user of an Excel integration who expects to import 25,000 rows of data into a spreadsheet has a different expectation from a user in a CRM system who hovers over an item and expects a pie chart from Cognos to appear in a pop-up window. Both users want their application to perform well, but the Excel user values volume of data over instant responses.
With CMS 10.1, there are several new options to help apply the best technique to the application at hand:
These features can be combined with existing features from previous versions of CMS:
Caching of report runs
CMS now automatically caches report runs. This allows you to ask for the report again without hitting the database. For example, in REST you can now do this
Due to the structured nature of SOAP, you cannot change the format of the cached output. However, you can reuse the same session information to form a REST URL that retrieves alternate output formats without rerunning the report.
Interactive PagedReportData requests
Previous versions of CMS returned the entire report in a single request. That technique is generally preferable for data mashups, such as alternate visualizations, where all the data is required to present a true picture. However, many integration scenarios require a more Viewer-like experience with quick response times. With CMS 10.1 you now have the option of running a report using the “pagedReportData” request. In this mode, CMS returns output a page at a time, and like Viewer you can move back and forth through the document and perform drill up/down/through, while still having access to all the regular CMS features such as the various output formats and report part selection.
LDX contains all of the data and formatting for a report, which makes it a very rich but very verbose output. For data mashup scenarios, CMS has introduced two new output formats called DataSet and DataSetAtom (not to be confused with the XML output generated by Report Server). These two formats return just the raw values for lists, crosstabs, and charts, which makes it easy to perform data mashups without parsing a large XML document.
<?xml version="1.0" encoding="UTF-8"?>
Selection Filter Query Optimization
When selecting a single report part, CMS now optimizes the query so that additional queries that are not part of the selection are excluded, resulting in a significant performance gain for multi-query reports.
Saved report runs
With “enable enhanced features” turned on in the report properties, you can get enormous performance gains by scheduling report runs during periods of low activity and using the saved output for your mashup instead.
Synchronous Report Runs
By default, CMS runs reports in asynchronous mode, which makes allowances for long-running reports and for secondary operations such as drilling and paging. However, if you know that you will not need to make any secondary requests, you can specify in REST that the report run synchronously. This removes the small bit of overhead required to create a session to manage the report's lifecycle.
If you are running a report asynchronously and know that you are not going to make any additional requests to it, you should release the report session using the release command. This frees up resources on the server and allows a higher volume of requests to be processed.
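As a purely illustrative sketch, a mashup might assemble the release request like this. The "release" command segment and its placement in the URL are my assumptions based on the description above, not the documented syntax; check the CMS Developer Guide for the exact form.

```javascript
// Hypothetical sketch: build the release request for a report session you
// are finished with. The "/rds/release/..." path is an assumption; consult
// the CMS Developer Guide for the real release command syntax.
function buildReleaseURL(gateway, conversationID) {
    return gateway + "/rds/release/conversationID/" + encodeURIComponent(conversationID);
}
```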
Performance is one of the most challenging considerations any application developer faces. The new features in 10.1 will help make your life as a CMS developer easier in tackling it.
NickMann 270001QA2W Tags: nc_drop emf db batching notification pending current activities 1 Comment 8,975 Views
IBM Cognos BI supports batch processing for a variety of tasks and objects. A common example would be a scheduled job containing report executions.
The monitor service stores the job and report execution commands in a Task Queue. Commands on the task queue are run as the system has capacity to do so.
There is a balance to be struck between the Cognos installation size and the rate at which background processing occurs; put another way, your installation hardware has a maximum capacity for background processing that you cannot exceed.
You will notice that if your scheduled execution rate exceeds your system capacity, the task queue will start to fill up. If you open the Current Activities view in Cognos Administration, it will show many pending report executions.
Stopping the scheduled activities using the Upcoming Activities view is one way to solve this issue and let the backlog of reports be processed.
Another approach that some customers have chosen is NC_DROP. This involves dropping all of the tables in the notification database that are prefixed NC_. When the system restarts, the tables are rebuilt and repopulated.
This has the drawback, however, that all of the schedules are re-synchronized to the NC tables, which can take some hours on a big system.
A recommended alternative is to stop the server and use the following SQL script to truncate the task queue tables. The schedule data in the NC tables is left alone, but the Current Activities view is empty on restart. Beware that report executions that were pending will not now execute.
TRUNCATE TABLE NC_TASK_HISTORY_DETAIL;
TRUNCATE TABLE NC_TASK_PROPERTY;
TRUNCATE TABLE NC_TASK_QUEUE;
TRUNCATE TABLE NC_TSE_STATE_MAP;
SQL script for current activity queue truncation in Cognos 10
NickMann 270001QA2W Tags: myinbox watermark report approval workflow 1 Comment 5,737 Views
Report approval process
With Cognos 10, using Event Studio and MyInbox, you now have the capability to do report approval. Here is how to create a report approval process in Cognos 10.
Watermarking approved reports
Approved reports could be output to a secure location and then distributed only once approved, but it's good to know that a report has actually been approved.
While Cognos does not yet support explicit watermarking in its report output, it is possible to enhance the data model to include an approval status; once a report is approved, the Event Studio SQL writeback task could update the database and mark the report as approved before distribution.
redwings 2700026VHC Tags: cognos architecture availability high 1 Comment 8,456 Views
This article identifies a possible design option for implementing high availability. The main design goal for implementing high availability is 100% up time which typically requires redundant hardware and software to manage the environment. This article does not take into consideration disaster recovery plans which introduce other complexities about location of the redundant equipment. This article does not address human errors which often contribute to system failures. This article assumes that only two physical servers are available. Additional comment s will be made to discuss what can be done to improve performance if more than two servers are available.
As indicated above, one of the most common methods to ensure high availability is to introduce redundancy into the overall architecture of the implementation. Out of the box, Cognos provides high availability within the application servers in the case of failure, but in order to provide complete failover, other hardware and software capabilities can be introduced for additional assurance. Listed below is a description of the various software and hardware components that make up a production-ready high availability solution.
This design pattern is based on a modular design. Each physical server in the module is installed with the same software stack. In this example the software components include IBM Cognos BI Server, IBM WebSphere® Application Server, IBM HTTP Server, IBM DB2 Enterprise Server Edition, and IBM Tivoli® System Automation for Multiplatforms (Tivoli SA MP).
Each server in the module is installed with the same Cognos components, but not all of the components will necessarily be turned on for each server. The following is a review of the Cognos components, followed by a description of how they are implemented for high availability using the concept of nodes.
Gateway: This component receives requests from users, encrypts passwords, gathers the information needed to submit the request to a Cognos server, and passes the request to a dispatcher for processing. For high availability Tivoli SA MP software is used to implement a virtual service IP address that is referenced by users to connect to the gateway on the corporate network. A resource group is created and managed by Tivoli to detect if either the http server or the machine itself is not accessible.
Report service: This service manages report requests and presents output to users through a Web portal called Cognos Connection.
Content Manager: This service manages the storage of application data such as security, configuration data, models, metrics, report specifications, and report output. It is needed to publish packages, retrieve or store report specifications, manage scheduling information, and manage the Cognos namespace.
For high availability we will introduce the idea of a BI module. It represents a combination of Cognos components and high availability software and/or hardware. The BI module must contain one BI type 1 node and one BI type 2 node, and it can contain between zero and either two or four BI extension nodes.
With this design we have also introduced a concept called a tier. Tiers are a logical grouping of services. In our case the gateway tier is the HTTP service; the application tier is the report processing tier and, in this case, the database engine as well. The data tier is a high-speed network-mounted file system that contains the databases used by the application tier.
The three types of nodes in a BI module follow:
BI type 1 node: The primary role of this node is to handle user requests through the Cognos gateway and to process report requests. The number of reports processed is based on the weight associated with the Cognos dispatcher and depends on the number of BI extension nodes in the module. This node also hosts a standby Cognos Content Manager and provides failover support for the content store. Every BI module includes one type 1 node.
BI type 2 node: The primary role of this node is to host the active Cognos Content Manager, to host the content store database and the audit database, and to process report requests. The number of reports processed is based on the weight associated with the Cognos dispatcher and depends on the number of BI extension nodes in the module. The gateway on this node provides failover support for the gateway on the type 1 node. Every BI module includes one type 2 node.
BI extension node: An optional node introduced if higher performance is required. The primary role of this type of node is report processing, and it is configured to have a maximum report processing capacity. The gateway is installed but not used, and the Content Manager is installed but not started. Every BI module includes between zero and either two or four extension nodes. As you add more extension nodes to the module, a higher number of users can generate reports concurrently.
The primary role of the type 1 node is to handle user requests through the Cognos gateway and to process report requests. The resource group for the Cognos gateway is initially assigned to this node, and its Content Manager is in a standby state. If the server fails or becomes unable to connect to the corporate network, the Tivoli SA MP software detects that the type 1 node is unavailable and automatically performs the failover. The failover and subsequent failback follow this sequence of events:
Failover scenarios: A mutual failover high availability configuration is implemented for the BI module using native Cognos BI Server functionality to manage the active Cognos Content Manager, and Tivoli SA MP is used to manage high availability resource groups for the Cognos gateway and BI module database resources.
If the type 1 node fails, the Tivoli SA MP software detects the failure and transfers the Cognos gateway resources to the type 2 node. If the type 2 node fails, the Cognos BI Server application detects the failure and designates the type 1 node as the active Content Manager. Tivoli SA MP also detects the failure and transfers the BI module database resources to the type 1 node. If an extension node fails, no failover is needed because report processing is available on the other servers.
Both file systems mounted in the BI module are located on the external storage allocated to the BI module. The /coghome and /cogfs file systems contain the DB2 instance that manages the Cognos content store and audit databases, and both are mounted on the type 2 node during normal operations. If the type 2 node fails, the type 1 node mounts the /coghome and /cogfs file systems and manages the active content store and audit databases.
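A quick sanity check before starting the database engine on the surviving node might look like the sketch below. This is ordinary Linux tooling, not part of Tivoli SA MP; the mount points are the ones named above:

```shell
# check_mounts prints one status line per required file system.
# /proc/mounts lists everything currently mounted on a Linux host, so a
# missing entry means the resource group has not moved to this node yet.
check_mounts() {
  for fs in /coghome /cogfs; do
    if grep -qs " ${fs} " /proc/mounts; then
      echo "${fs}: mounted"
    else
      echo "${fs}: not mounted - acquire the resource group before db2start"
    fi
  done
}
check_mounts
```

In a managed failover Tivoli SA MP performs the mounts itself; a check like this is only useful when you are verifying the state of a node by hand.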
The figure below shows the minimum configuration possible, in which there is one type 1 node, one type 2 node, and no extension nodes. As this diagram shows, the gateway service IP is assigned to the type 1 node, and therefore user requests are routed to the gateway on the type 1 node. The Content Manager on the type 2 node is active. The Content Manager on the type 1 node is in a standby state, and polls the active Content Manager at a regular interval to determine whether the active Content Manager is still available. The report service is active on both nodes.
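To see for yourself which Content Manager is currently active, you can fetch each node's Content Manager status page in a browser or with curl. The sketch below only builds the URLs to poll; the host names are placeholders and port 9300 is the Cognos default dispatcher port, so adjust both for your environment:

```shell
# Build the Content Manager status URL for each node in the module.
# Fetching each URL (for example with curl) shows whether that node's
# Content Manager is currently active or in standby.
for node in bi-type1 bi-type2; do
  echo "http://${node}:9300/p2pd/servlet"
done
```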
Design considerations for improving performance
As mentioned above, introducing the BI extension node into the architecture allows for improved performance for the following reasons. Disabling the report service on the type 2 node would be a more optimal configuration, since the content store process consumes both CPU and memory as the workload increases. The BI extension node could then share report requests with the type 1 node instead of the type 2 node. The failover scenarios would work the same as above, except that the BI extension node will handle report requests if the type 1 node fails.
A hardware load balancer could also be introduced in front of the gateway servers so that multiple gateway servers can process the HTTP requests. To obtain better performance, the data tier should also include the DBMS and the Content Manager server process. Stay tuned for another post on design options for performance. For more details on this topic, please refer to the links below.
Note: Most of this material was obtained from the IBM Smart Analytics Business Intelligence Module Guide (LC27-3637-00).
For more information on these kinds of architectures please refer to Administration and Security 10.1.0 at http://publib.boulder.ibm.com/infocenter/cbi/v10r1m0/index.jsp
And IBM Smart Analytics Business Intelligence Module Guide at: