
Blog

New Base Samples for IBM #Cognos Analytics 11.0.10

There are two types of samples created exclusively for IBM Cognos Analytics. The Extended Samples require a database and database connections to host the sample Great Outdoors data, and they must be installed and configured; here are the instructions for installing the Extended Samples. We have also created a set of Base Samples in a deployment that can be imported in one step. These samples use Data Modules exclusively as their underlying data sources (no cubes, databases, or packages). Here is a video that explains how to find and import the Samples_for_Install_11_0_10.zip deployment file. After you import the deployment, you will see the following new Base Samples:

Team content > Samples > Dashboards

California website visits
This sample showcases the new latitude/longitude mapping functionality in dashboarding. Multiple maps use regions, points, and latitude/longitude to display California website visit data for the fictional Sample Outdoors Company website.

Policy analysis
This sample dashboard has been updated to include a drill-through definition to the "Customer lifetime value analysis" report.

Team content > Samples > Data

NYPD motor vehicle collisions
This sample data provides a breakdown of every collision in NYC from 2015 to 2017 by location and injury. Each record represents a collision in NYC by city, borough, precinct, and cross street. Source: Police Department (NYPD), NYC OpenData.

Sales staff
This sample data contains sales staff information for the fictional Sample Outdoors Company. Source: IBM. Note: This uploaded file is now included in the Sample data module.

Audit Samples

We have added one new sample audit report in 11.0.10:

Deleted user account report
This sample audit report lists user accounts that have been deleted during a specified time period. It provides information about the deleted user ID, the time deleted, the account type, and the status of the request.

To learn more about the audit samples and how to install them, please visit this blog post.

JavaScript Samples

All of the JavaScript samples now use data modules as their data source, rather than the Sample Outdoors Company database. Please visit this blog post for more details.

Related posts:
New Audit samples for IBM #Cognos Analytics 11.0.7+
Updated JavaScript samples for IBM #Cognos Analytics 11.0.10
Samples landing page
Supplementary (Legacy) IBM Cognos Analytics 11 samples
Guide to IBM Cognos Analytics sample data sets

Please visit our IBM Business Analytics Support channel on YouTube.

Blog

How to Update an Existing Uploaded File in IBM #Cognos Analytics

Suppose I have already uploaded a file to IBM Cognos Analytics. If I make any of the following changes to it, can I re-upload (update) it successfully?

Related posts:
What types of files can be uploaded into Cognos Analytics?
Uploading data
Using an uploaded file source
Creating a data module
Samples Landing Page
Library of How-To Videos for IBM Cognos Analytics
Guide to IBM Cognos Analytics Sample Data Sets

Please visit our IBM Business Analytics Support channel on YouTube.

Blog

Data server updates in #Cognos Analytics 11.0.8

Amazon Athena

Cognos Analytics 11.0.8 introduces support for Amazon Athena, a SQL interface for data files stored in Amazon S3 storage. Athena is built on Presto, an open-source distributed SQL query engine that Cognos Analytics has also supported since version 11.0.7.

Spark SQL

Cognos Analytics 11.0.8 also adds support for Spark SQL via the Simba Technologies JDBC driver. Spark SQL (Thrift Server) is a service that receives SQL queries over JDBC and, in turn, generates Spark requests that are executed by a cluster of Spark executors (see the short sketch at the end of this post).

Azure SQL Data Warehouse

Many Cognos Analytics customers were already using Azure SQL Data Warehouse because it is a variant of Microsoft SQL Server and Azure SQL Database; it is now officially supported alongside both.
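Cognos Analytics itself reaches Spark SQL through the Simba JDBC driver, so you don't write any client code. Purely to illustrate what the Thrift Server does (accept SQL from a client and have Spark executors do the work), here is a minimal sketch using the open-source PyHive client, which is not part of Cognos. The host, port, credentials, and table name are placeholders.

```python
# Illustration only: submit a SQL statement to a Spark SQL Thrift Server.
# Cognos Analytics uses the Simba JDBC driver instead of this client.
from pyhive import hive  # pip install "pyhive[hive]"

# Placeholder connection details; the Thrift Server commonly listens on port 10000.
conn = hive.Connection(host="spark-thrift.example.com", port=10000,
                       username="analyst", database="default")

cursor = conn.cursor()
# "sales" is a hypothetical table registered in the Spark metastore.
cursor.execute("SELECT region, SUM(units_sold) AS units FROM sales GROUP BY region")
for region, units in cursor.fetchall():
    print(region, units)

cursor.close()
conn.close()
```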

Blog

Guide to IBM #Cognos Analytics Sample Data Sets

You can start getting familiar with IBM Cognos Analytics by using the sample data sets provided in this community. These data sets have all been tested with IBM Cognos Analytics and are the basis for many of the samples and videos. A description of each is below.

To use these data sets:
1. Download a file from the links below.
2. Log in to IBM Cognos Analytics.
3. On the home page, click New and select Upload Files in the left navigation bar to browse and select the data file.

(If you would like to preview a downloaded sample locally before uploading it, see the short sketch at the end of this post.)

American time use
The American Time Use Survey data contains information about the amount of time (in minutes per day) people spend on activities such as paid work, volunteering, childcare, and socializing. The survey question asked was: "Now I'd like to find out how you spent your time yesterday, from 4:00 in the morning until 4:00 AM this morning." The previous day was always a weekday. Source: Bureau of Labor Statistics.

Banking loss events
The banking loss events data contains seven years of loss events, including net loss, recovery amount, region, and risk sub-category. Source: IBM.

Boston 311 calls
Like many other cities, Boston, MA logs hundreds of thousands of requests each year for services such as snow plowing, street cleaning, and pothole repair. Source: City of Boston.

California website visits
This sample data contains 2016 website visit data for the fictional Sample Outdoors Company website by ZIP code and latitude/longitude in California. Source: IBM.

Customer analysis
This sample insurance data contains information from a fictional company about customer demographics, policies, and claims. Columns include state, education, income, marital status, policy type, total claim amount, and more. Source: IBM.

NYPD motor vehicle collisions (New!)
This sample data provides a breakdown of every collision in NYC from 2015 to 2017 by location and injury. Each record represents a collision in NYC by city, borough, precinct, and cross street. Source: Police Department (NYPD), NYC OpenData.

Sales staff (New!)
This sample data contains sales staff information for the fictional Sample Outdoors Company. Source: IBM.

SampleFile_GOSales
This sample data is intended to help beginners start authoring reports and dashboards. The Sample Outdoors Company is a fictitious business operation with data for products, retailers, order methods, and year. Source: IBM.

Storm events 2015
Storm event data is provided by the National Weather Service (NWS) and contains statistics on personal injuries and damage estimates. Dataset source: NCEI DSI 3910_03, gov.noaa.ncdc:C00510.

Weather analytics
This sample data is intended to illustrate weather analytics. Additional weather-related data has been added to the Boston 311 calls data described above.

Related posts:
Samples Landing Page
Library of How-To Videos for IBM Cognos Analytics

Please visit our IBM Business Analytics Support channel on YouTube.
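If you want a quick local preview of a sample before uploading it, a short pandas sketch works. The file name below is a placeholder for whichever sample you downloaded.

```python
# Optional: peek at a downloaded sample before uploading it to Cognos Analytics.
# The file name is a placeholder; substitute whichever sample you saved.
import pandas as pd

df = pd.read_excel("Boston_311_calls.xlsx")  # .xlsx files require the openpyxl package
print(df.shape)    # number of rows and columns
print(df.dtypes)   # column names and data types
print(df.head())   # first few records
```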

Blog

Shocking Data Dashboard Confession: “I’ve Deceived You”

Years of mistaking correlation for causation

When your data dashboard reaches full AI maturity, it will have to apologize: "Sorry I've been deceiving you." Without you knowing, it has delivered years of misinformation. That dashboard solution that promised it would always be "intuitive, interactive, and drag & drop" will use its AI capability to say, "It's not my fault; I was never equipped to show the difference between causation and correlation."

Odds are high that your data discovery vendor has been ignoring analytics. That data "snapshot" in your dashboard did not make you a better leader or decision maker. In fact, it deceived you into seeing correlation instead of causation, a mistake that has sunk many careers.

"…dashboards are poor at providing the nuance and context that effective data-driven decision making demands." - Harvard Business Review, 13 Jan 2017

The Harvard Business Review (HBR) offers caution to executives everywhere: your dashboard can mislead you. The HBR article "3 Ways Data Dashboards Can Mislead You" makes its point clear: you need both predictive and prescriptive analytics to get dashboards right. Many data discovery vendors have been ignoring this for years by offering dashboards that deal only with current or past activity.

"Make your data make an impact" - Data discovery vendor

The tagline offered by one vendor makes it sound like visualization is all you need to prove your point. But what if the visualization misleads you? Consider the well-known graphic that shows an obvious pattern between people who die from becoming tangled in their bedsheets and per capita cheese consumption in the US (Source: Spurious Correlations). Most dashboards and data discovery tools deceive you into believing these groupings are interconnected. There is correlation in the values, but that does NOT mean that a change in one variable (cheese) is the cause of a change in the other variable (deaths). You won't know it, but your dashboard has been misleading you. (A short sketch at the end of this post shows how easily such a correlation shows up in the numbers.)

"It's far too easy — and unfortunately common — for managers to interpret the groupings in a dashboard as causative when they may not be." - Harvard Business Review, 13 Jan 2017

When you're caught up in the elegance of data dashboards, it's easy to dismiss the underlying analytics. An attractive chart describing what has happened is undoubtedly less intimidating than an attempt to understand what will happen. But there's no longer an excuse for mistaking the two. Past activity is not a predictor of the future. In the past, analytics might have been inaccessible to senior management, but with new natural-language solutions, that no longer needs to be the case.

Don't wait for your data dashboard confession. If your dashboard could talk right now it might say, "Save yourself and see what analytics can do for you. Start a free trial of Watson Analytics before it's too late!"
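To see how easily a spurious relationship like the one above shows up in the numbers, here is a small sketch. Both series are invented for illustration (they are not the actual Spurious Correlations figures); they simply happen to trend upward over the same ten years.

```python
# Two made-up series that both rise over the same ten years.
# They correlate strongly, yet neither one causes the other.
import numpy as np

cheese_consumption = np.array([29.0, 29.5, 30.1, 30.4, 31.0, 31.2, 31.9, 32.3, 32.6, 33.0])  # lbs per capita (hypothetical)
bedsheet_deaths    = np.array([320, 410, 450, 480, 540, 570, 620, 680, 720, 760])            # deaths per year (hypothetical)

r = np.corrcoef(cheese_consumption, bedsheet_deaths)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # close to 1.0, but there is no causal link
```

A high correlation coefficient says nothing about cause; that is exactly the distinction the HBR article warns dashboards gloss over.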

Blog

Five Things So-Called “Business Intelligence” Vendors Don’t Talk About

Many start-ups have made fortunes claiming to deliver "business intelligence." Don't mistake these for business intelligence solutions. Business intelligence is not data discovery, and it's not trivial. Not unlike your general ledger, you deliver one version so executives can stake their reputation on its outputs. Unfortunately, some vendors don't see the bigger picture of what's required to make business intelligence trusted, efficient, and governed. Without this bigger picture, they create disconnected users operating with outdated versions of data, rather than an ecosystem of collaboration. Here's a tip: if they don't want to talk about these five things, they may be hiding important details from you!

Don't talk about cloud computing

One data discovery vendor announced their new European data center by saying, "… co-located facility will support users of its cloud analytics tool…" and "The selected data center is ISO 27001 certified and has a disaster recovery site located in Munich, Germany." Notice the words "co-located" and "selected," and the lack of any details on square footage, number of servers, security staff, and so forth. These firms avoid all the investments and data center details because they subcontract their cloud to third parties. What they don't want to discuss is staffing, infrastructure, and facilities, because they don't own, operate, or guarantee any of it! Using Cognos Analytics on Cloud, you get services from IBM, using its cloud facilities, from its 40,000 cloud professionals.

Don't talk about cloud security

One Cognos Analytics on Cloud client inquired, "Are guards hired by you for your company or shared?" and, "Are all facilities used exclusively by your company, or are some shared?" What BI vendors don't want to discuss is that they have little to no staff residing inside these data centers, and may have no dedicated staff for physical, network, or data security. On December 9th, I reviewed the web pages of four so-called competitors and found that, of 20+ job openings, none were for security specialists. Two of the four competitors offered solutions exclusively on cloud. Who's keeping on top of constant security threats? Using Cognos Analytics, you get IBM professionals who make it their full-time job to specialize in security. (IBM had 55 job postings under the search term "Cloud Security.")

Don't talk about predictive analytics

One data discovery vendor promotes "The analytics your customers want," but they'll avoid the word "predictive." Their idea of analytics entails colorful graphs of what has happened, without predictions of what will happen. They don't want to discuss full analytic capabilities such as regression, bootstrapping, time-series analysis, and statistical modeling. They'll likely be unable to discuss something as simple as random sampling methods, which let you avoid processing every record in your database during analysis. These so-called "BI" vendors don't offer predictive analytics; they rely on highly graphical charts built on frequencies, means, and averages. Cognos Analytics offers predictive analytics to see where you're going, not where you've been.

Don't talk about cognitive capability

These supposed "BI" vendors got their reputation by delivering pretty graphs, but they lack the robust statistical features, staffing, and platform scalability to make cognitive computing a reality. Cognitive requires predictive analytics as a foundation to learn, suggest, and understand.
Our competitors don't invest in an analytics workbench; therefore, their cognitive capabilities will always be limited. Using IBM, your cognitive experiences are built upon the proven foundations of IBM SPSS and IBM Watson.

Don't talk about data governance

One cloud BI vendor gives you no option but to REPLICATE your data to the cloud, thus losing any benefit of your existing hardware and database investments. They spin this duplication and extra effort as, "Keep data living behind a firewall fresh using the [product] sync client. This client runs on a computer within your firewall and securely manages the communications with [product]. The sync client 'pushes' extracts of your on-premises data to [vendor/product] on a given schedule." They make all this redundancy of multiple data sources in the cloud sound so positive. You are required to continually duplicate your on-premises data to the cloud and hope everyone runs reports from the same version of the data. If you want everyone to report using "accurate" data, you must sync/duplicate rather often. What's enough: each day, each hour, each minute? Over 90% of our Cognos Analytics on Cloud clients use the data and investments they have on-premises today, without duplication. Using Cognos Analytics, you can use existing on-premises database investments, use cloud-based data, or combine the two.

If you want to create a nation of knowledgeable users, and avoid these islands of disinformation, check out Cognos Analytics on Cloud or read our "Ten Differentiations" blog.

Blog

Journey to Cognitive Excellence: Harness the Force of a Strong Analytics Foundation #ibmwow

Julie Severance, IBM Global Leader, Data & Analytics Strategy & Initiatives, will be leading an educational panel on cognitive excellence at IBM Insight at World of Watson 2016. Join us in Las Vegas, October 24-27, for this panel and many other exciting activities. Learn more here.

Becoming a cognitive business is a journey, not a destination. A cognitive analytics culture is not something you can just buy or install. Although the right technology is crucial, its true value arises when the organizational mindset changes. Many organizations have learned to embrace analytics, but embracing cognitive is another step entirely, and it's one that may be even more challenging. However, the possibilities are endless and the potential rewards make it worthwhile.

It's important to understand that analytics and cognitive technologies are fundamentally different. Analytics is a rule-based system that applies predetermined algorithms to vast amounts of data. It requires you to know what you're looking for, and how to ask in a way the system can understand. By contrast, a cognitive system can learn, and can interact with people using natural language. That means unprecedented flexibility and agility: you can ask the system what you want, and it can figure out new and better ways of interpreting data and reaching goals. In other words, you no longer have to tell the computer system exactly what to do.

But this cognitive flexibility also challenges business operations, because the organization needs to be able to respond to changing and sometimes radically unexpected suggestions. Moreover, people throughout your organization need to know how to leverage emerging capabilities like cognitive technology, and must understand the unique types of insights and uncertainties that a cognitive system can provide.

Plenty of research shows that organizations that excel recognize the importance of building a strong foundation that embraces all forms of data and advanced analytic capabilities. And when you introduce cognitive capabilities into the organization, the possibilities are endless. While analytics handles structured data, cognitive can dive into unstructured elements such as text, pictures, blogs, social media, and more. Taken together, cognitive and analytics can address different business needs, and can see the same data from different perspectives, bringing greater insight than either technology on its own.

Come join us at our panel session 2993: Journey to Cognitive Excellence - Harness the Force of a Strong Analytics Foundation at IBM Insight at World of Watson. Meet professionals from the aerospace, health care, industrial construction, and IT industries (and co-authors of "The 5 Keys to Business Analytics Program Success"). Be prepared to learn what it takes to achieve excellence in building a cognitive business: how to manage a changing strategy; tackle culture challenges when introducing new capabilities; align business priorities; quantify and demonstrate tangible business value; implement processes that balance agility, empowerment, and governance; evolve talent and skills; and architect a solution with future innovation in mind. Register now at: http://ibm.co/CAWoW1FB

Blog

Shining the Light on Dark Data through Data Preparation and Cognitive Analytics

Blog editor's note: Dan Potter, the author of this blog, will be leading a session that takes a deeper look at Cognos Analytics and unlocking data insights at IBM Insight at World of Watson 2016. Join us in Las Vegas, October 24-27, for this session and many other exciting activities. Learn more here.

Data preparation has historically been a major drag on the effectiveness of analytics and business intelligence solutions. It has typically required a specialized data scientist who could prepare the data, resolve any discrepancies, and format it for analysis. As analytics solutions grow more sophisticated and are able to process more and more types of unstructured data, including so-called "dark data" (information that is collected, processed, and stored during regular business activities but not commonly used in analytics), the challenge only grows. According to the 2015 Stratecast Cloud User Survey, up to 60 percent of a typical analyst's time is spent on data preparation.

This burden of data preparation has historically meant that analytics was done by dedicated specialists; it simply wasn't possible for professionals in other areas of the business to add analytic insight to their normal workflow. But analytic insights are most useful when they are widely shared throughout the enterprise and when professionals in all areas of the business have access to analytic tools that help them make sense of the data relevant to their particular jobs.

Fortunately, leading analytics companies have started to respond to this pressure by developing self-service data preparation tools, which use cognitive technology to help automate and standardize the data preparation process. With cognitive-guided self-service features, more users than ever before can access relevant data, prepare it for analysis, apply analytic tools, package the results in a visually appealing format, and share them throughout the organization.

Cognitive-guided self-service data preparation promises to be "the next big disruption in business intelligence."[1] Fortunately, it's a disruption that holds enormous potential to make analytics even more useful and expand its scope to even more types of data. In the future, fewer and fewer types of data will be "dark."

To learn more about IBM Cognos Analytics and how it can help you derive insights from even more types of data, join us for Session 3026 - Shining the Light on Dark Data through Data Preparation and Cognitive Analytics at IBM World of Watson. Register now: http://ibm.co/CAWoW1FB

[1] http://www.datawatch.com/resource-center/webinars/gartner-dataprep-nxt-disruption/

Blog

Grab a Slice of Data: Cognos Analytics 11.0.4 Unveils Data Sets

Reports, dashboards, and stories need data. This data might be made available to you by an administrator who creates packages, or you may have uploaded your own data from an Excel file. Cognos Analytics version 11.0.4 introduces a new type of gateway to data: data sets.

Data sets are created from packages or data modules, and can be used to gather a customized collection of items that you use frequently. As you make updates to your data set, dashboards and stories that use it are kept up to date the next time you open them. You define a data set by choosing one or more items (columns) from a package or data module and applying filters to reduce the data. You're essentially specifying the rectangle of columns and rows of data that you need. The data is extracted and stored within the Cognos Analytics system as explained here. Because the data is cached, data sets can improve query performance and reduce the workload on your database(s).

Here are some reasons to use a data set:
• improve query performance if your database is slow
• reduce the load on an overworked database (especially during peak periods)
• retain a version of the data at a specific time

For data sets created from relational packages or data modules, you have the option to Summarize detailed values, suppressing duplicates. When you use this option, measure values are aggregated to the lowest grain that is explicitly included in the data set. For example, suppose your data warehouse stores millions of records, one for each transaction where units were sold, but you're only interested in analyzing the total sales per region. If your data set contains only the Region and Units Sold columns and you use the option, the data set will contain only as many rows as there are regions. Notice in the following screens that the values in the Quantity column are much larger when the option is enabled: the data set has far fewer rows because the quantities are rolled up into each distinct combination of retailer and order method type where units were sold. The benefit of using this option is that it condenses the data set into fewer rows, and, all else being equal, fewer rows lead to better-performing reports and dashboards! Do not use the option if information in the details is important for your analysis.
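If it helps to picture what the option does, here is a minimal sketch of the same roll-up in pandas. The data frame and column names are invented for illustration; Cognos Analytics performs this aggregation for you when the option is enabled.

```python
# Illustration of "Summarize detailed values, suppressing duplicates":
# detail rows are rolled up to the lowest grain kept in the data set.
import pandas as pd

# Hypothetical transaction-level detail (one row per sale).
detail = pd.DataFrame({
    "Region":     ["East", "East", "West", "West", "West"],
    "Units Sold": [10, 15, 7, 3, 20],
})

# Keeping only Region and Units Sold with the option enabled is equivalent to:
summarized = detail.groupby("Region", as_index=False)["Units Sold"].sum()
print(summarized)  # one row per region, with Units Sold aggregated
```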
Refreshing your Data Set

Through the Cognos Analytics portal, you can change a data set's columns and filters anytime you want. You can also update its data on demand, or schedule refreshes to occur automatically (weekly, daily, hourly, or every X minutes). The information within a data set is pre-calculated and pre-aggregated: what you see in the preview area when defining the data set is what gets stored on the system. If a package or data module truncates a column's values and you create a data set with it, the truncated values (as opposed to the original values) are extracted and stored. Transformations that take a long time can be completed overnight so they're ready to use first thing in the morning.

Data Sets from Data Sets

Data sets can be sources for data modules, and since you can create a data set from a data module, you can (indirectly) create a data set from one or more other data sets! Each data set enables you to further combine, summarize, and pre-calculate data that will answer your team's questions. With this approach you can summarize summaries to whittle down trillions of records from your Hadoop system into information that's better suited for ad hoc exploration.

Release Them Into the Wild

Cognos Analytics data sets are insulated from all other systems, including the underlying database, so your database administrator won't be worried about runaway queries while data sets are being consumed. The size of a data set is easily controlled with filters. Administrators can limit the size of any single data set and the total volume that any one user can occupy on the system. Administrators can also control who is permitted to create data sets; perhaps you want to start by enabling a small group of power users before expanding. If you keep a data set small, you can rest assured that no matter what someone does with it in a dashboard, they will get snappy response times.

Moving Data Sets between Cognos Analytics Environments

Data sets can be transferred from one Cognos Analytics environment to another. If you want to deploy into production some data sets that you tested in a staging environment, simply create a deployment in the staging environment that includes the folder(s) containing the data sets (select the deployment option "Include report output versions"* if you want the extracted data included; otherwise only the metadata will be), and then import the deployment into the production environment.

Data Sets Replace the Snapshot Mode of Data Modules

Prior versions of Cognos Analytics offered a snapshot mode option within a data module that would extract all the data. This snapshot mode is no longer available, as it has been made obsolete by data sets. Data modules that were set to snapshot mode in a prior version will upgrade into "live"/"regular" data modules in 11.0.4 and higher. Data sets have the following advantages over the now-deprecated snapshot mode:
• Data sets give you the option of extracting summarized or detailed values.
• Data sets store data as a single table, whereas snapshot modules store separate files for each table in the module. All else being equal, a query that does not require a join will be faster.
• A subset of a data module can be extracted into a data set.
• Data set refreshes can be scheduled.

Footnotes:
* As of 11.0.5, use the option Include uploaded data instead.

Blog

Where is data uploaded into Cognos Analytics stored?

In versions of Cognos Analytics prior to 11.0.3, data sets from uploaded files and data modules in snapshot mode were saved directly to the file system. The data is stored in a columnar format designed to be loaded into memory quickly, on demand, when accessed by the Cognos query engine. As of 11.0.3, when files are uploaded, the data is still converted to the columnar format, but it is stored in the Cognos content store. This provides a central storage location and means the life cycle of the data is better managed when performing tasks such as move/copy and deployment.

If the storage capacity of the content store database is a concern in this context, you have the option to use an external object store instead. Using an external object store for report outputs and data reduces the size of the content store and improves Content Manager performance. Note that reduced load on the Content Manager may or may not affect user wait times for dashboards and reports: many factors can impact report and dashboard performance, including but not limited to network speed, so an external object store may improve dashboard and report performance in some environments but not in others.

Although the data is stored in either an external object store or the content store database, it is automatically extracted to temporary file locations on the Application Tier Component servers when it is being used in dashboards and reports. The default location for these temporary data files is install_location\data\datafiles, and the files are removed automatically by the Cognos Analytics software. Prior to 11.0.3, the directory for data files needed to be set to a network path in distributed installations of Cognos Analytics. This is no longer a requirement, but it can still provide benefits by having all servers share the same instance of the temporary files.

As of version 11.0.3, this data can be included in deployments by enabling the option to include report output versions* in the export deployment specification. By default, data that was uploaded to Cognos Analytics in a version prior to 11.0.3 will not be included in these deployments. You can use the following utility to migrate data that was uploaded in 11.0.0, 11.0.1, or 11.0.2 from the file system into the content store database or external object store so that it can be included in deployments.

As of Cognos Analytics version 11.0.3, there is a utility in /bin and /bin64 named:
MigrateUploadedFiles.bat
MigrateUploadedFiles.sh

The .bat file is run on Windows, while the .sh file is run on Unix/Linux platforms.

Usage options:
-n : Authentication namespace, -u and -p required if specified [optional]
-u : Namespace user, -n and -p required if specified [optional]
-p : Namespace password, -n and -u required if specified [optional]
-df : Data files location [optional]
-params : Specifies program parameters with a file [optional]

Example: UploadedFileMigrationTool -n namespace -u username -p password

To run, simply run the following command from your /bin64 directory:
MigrateUploadedFiles.bat

If authentication is required, run:
MigrateUploadedFiles.bat -n <namespace> -u <username> -p <password>

In either case, the user is required to have the 'System Administrator' role. The script will run, and the output should be in the form of:

D:\\bin64>MigrateUploadedFiles.bat
running:
log4j:WARN No appenders could be found for logger (org.apache.commons.httpclient.params.DefaultHttpParams).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
The following storeId 'iDBFE011C600F4B5CB62A4CB0CF24B433' is not an Uploaded File and is associated with file: Boston_311_calls_xlsx.parquet
Successfully migrated uploaded file: Legacy1_csv.parquet
The following storeId 'iBD98A1CC5A4247E6933556B44233C92C' is not an Uploaded File and is associated with file: SampleFile_GOSales_xls.parquet
The following storeId 'i6CABB57DA69F4FDD9CB351F6A91B1D38' is not an Uploaded File and is associated with file: Banking_loss_events_xlsx.parquet
The following storeId 'i736C61A6681A41E4A37DA5E709F4C02F' is not an Uploaded File and is associated with file: American_time_use_xlsx.parquet
*******************************************************************************
Total files to migrate: 5
Number of files migrated: 1
*******************************************************************************
Successfully migrated files have been backed up in: D:\\ \bin64\..\configuration\..\data\datafiles\MigratedUploadedFiles_1465833065909.zip
D:\\bin64>

Footnotes:
* As of 11.0.5, use the option Include uploaded data instead.