|Create a business intelligence and analytics service in Ruby with the dashDB service
The dashDB service available in IBM Bluemix provides a powerful, easy-to-use, and agile platform for business intelligence and analytics. It is an enterprise-class managed service powered by the in-memory-optimized, column-organized BLU Acceleration data warehouse technology. This article demonstrates how easy it is to incorporate the dashDB service into your application so that you can stay focused on your application logic.
|Articles||08 May 2015|
|Load and import error checking for ETL users in DB2 for Linux, UNIX, and Windows
This article demonstrates how to check LOAD and IMPORT operations through ETL calls to SYSPROC.ADMIN_CMD in a DB2 for Linux, UNIX, and Windows Database Partitioning Feature (DPF) environment. Using ETL calls, you can accomplish error checking with more control and precision than most ETL tools provide.
|Articles||11 Jul 2013|
|Data mining techniques
Examine different data mining and analytics techniques and solutions. Learn how to build them using existing software and installations.
|Articles||11 Dec 2012|
|Big data business intelligence analytics
Learn about integrating business intelligence and big data analytics. Explore the similarities, differences, and what choices to consider.
|Articles||20 Nov 2012|
|Solving Operational Business Intelligence with InfoSphere Warehouse Advanced Edition
In this IBM Redbooks publication, we explain how you can build a business intelligence system with InfoSphere Warehouse Advanced Enterprise Edition to manage and support the daily business operations of an enterprise, generating more income at lower cost. We describe the foundations of business analytics, the data warehouse features and functions, and the solutions that can deliver immediate analytics and help you drive better business outcomes. We show you how to use the advanced analytics of InfoSphere Warehouse Advanced Enterprise Edition and its integrated tools for data modeling, mining, and text analytics, and for identifying and meeting data latency requirements. We describe how the performance and storage optimization features can make building and managing a large data warehouse more affordable and significantly reduce the cost of ownership. We also cover data lifecycle management and the key features of IBM Cognos Business Intelligence.
|Redbooks||08 Oct 2012|
|Virtualized Business Intelligence with InfoSphere Warehouse
This IBM Redbooks publication discusses the deployment of a BI cloud solution. It includes details such as understanding the architecture of a cloud, planning implementation, integrating various software components, and understanding the preferred practices of running a cloud deployment. Essentially, this book can be used as a guide by anyone who is interested in deploying a virtualized environment for a BI cloud solution.
|Redbooks||08 Oct 2012|
|Data.gov for government agencies
Because people are more aware of the value of open data, entire new economies have sprung up around its use and management. In 2009, the U.S. Federal Government launched Data.gov, a site to aggregate feeds of government data. Pressure on agencies to publish information at Data.gov has been steady. The Open Government Directive of 2009 requires all Federal agencies to post at least three high-value data sets online and register them on Data.gov. Learn about Data.gov, the basic information your agency needs to know to participate in this revolution in government, and ideas for doing so efficiently.
Also available in: Russian
|Articles||28 Feb 2012|
|Back up, restore, and roll forward in an InfoSphere Warehouse data warehouse environment
Knowing how to back up and restore your database is a fundamental skill for every database administrator. In a partitioned database environment, where your database is split across multiple partitions or nodes, there are special considerations. This article introduces the basics and demonstrates step-by-step the process of backup, recovery, and rollforward in the IBM InfoSphere Warehouse partitioned environment.
|Articles||08 Dec 2011|
|Effectively use DB2 data movement utilities in a data warehouse environment
Choosing proper data movement utilities and methodologies is key to efficiently moving data between different systems in a large data warehouse environment. To help you with your data movement tasks, this article provides insight on the pros and cons of each method with IBM InfoSphere Warehouse, and includes a comparative study of the various methods using actual DB2 code for the data movement.
|Articles||17 Nov 2011|
|Online roll-out with table partitioning in InfoSphere Warehouse
Starting with DB2 9.7.1, the table partitioning feature is enhanced with support for online roll-out using detach. With online roll-out, queries continue to access the partitioned table while one or more partitions of the table are being detached using the ALTER TABLE DETACH PARTITION command. This article discusses how online detach improves the availability of the warehouse and describes some best practices for leveraging and monitoring the new behavior. A stored procedure is provided to help database administrators script the post-detach processing on the target table, that is, the new table created as a result of the detach operation. The article also demonstrates how the same procedure can be used to simulate synchronous detach behavior in certain application scenarios. We conclude with frequently asked questions related to detach.
|Articles||27 Oct 2011|
|Creating an active-active data warehouse topology using IBM InfoSphere Warehouse
This article explains how you can create an active-active environment for IBM InfoSphere Warehouse to meet the highest availability requirements using DB2 for Linux, UNIX, and Windows, WebSphere Edge Components, and Q Replication.
|Articles||21 Oct 2011|
|Configure and monitor InfoSphere Warehouse with Optim Performance Manager
Optim Performance Manager Extended Edition provides end-to-end database transaction response time monitoring for InfoSphere Warehouse applications with its Extended Insight capability. This capability gives you insight into the transaction and SQL statement response time metrics of a database application throughout all layers of the software stack: from the time the SQL is issued in the application, through the network, to the database server. Special support is available for InfoSphere Warehouse SQL Warehousing Tool (SQW) applications, whose transactions and SQL statements from the InfoSphere Warehouse Application Server are recognized automatically. This article provides detailed information for installing, configuring, and validating the Optim Performance Manager Extended Insight feature for InfoSphere Warehouse applications.
|Articles||25 Aug 2011|
|Use the IBM Industry Model Insurance Information Warehouse to define smart and mature data models
In this tutorial, understand the method for developing data models for data warehouse projects using the IBM Insurance Information Warehouse (IIW), which is part of the IBM Industry Models offering for the insurance domain. The tutorial shows a recommended approach for developing core data warehouse (CDW) models and data mart (DM) models. It also introduces the data warehousing development method (DWDM) for working with the IIW model pattern framework to architect data warehouse (DWH) solutions for insurance companies.
|Articles||23 Dec 2010|
|Tools and XML functionality for DB2 pureXML users
This article provides guidance to database users in choosing XML tools to help them with the new responsibilities that arise now that IBM DB2 can efficiently store and manipulate XML data with pureXML. The size of XML documents, which can vary from a few kilobytes (KB) to many megabytes (MB) per instance, and their hierarchical structure create the need for new tool capabilities to ease the tasks of creating, viewing, editing, and querying XML instances and schemas when using DB2 pureXML. This article reviews the XML capabilities in tools available from IBM for working with XML database objects, the different job roles that are impacted by having XML in the database, and the specific tasks involved. It then describes the key XML-related tasks that arise and outlines which tools provide capabilities to help with those tasks.
|Articles||02 Dec 2010|
|Find databases with protected health information
Identity theft and medical fraud are growing problems. They are so big that the U.S. government is spending billions of dollars securing its own computer systems and has written thousands of pages of new regulations that you must follow to help protect your customer and employee data. To comply with new regulations and properly secure data, you will need to find personally identifiable information (PII) and protected health information (PHI) in your databases and documents. Both PHI and PII are conceptually easy to understand but very difficult to track across the thousands of relational data stores, files, and spreadsheets that make up a typical organization's IT environment. This article describes some methods to automatically identify and inventory PII, PHI, and other sensitive data within databases and spreadsheets using Java technology and the Apache Ant build tool.
|Articles||19 Oct 2010|
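The inventory approach described above can be sketched in miniature. The following Python example is a simplified stand-in for the article's Java-and-Ant tooling; the patterns, labels, and sample data are invented for illustration, and real scanners need far more rigorous rules and validation:

```python
import re

# Hypothetical, simplified patterns for PII/PHI-shaped values.
# Real scanners validate far more carefully than these illustrations.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def scan_text(text):
    """Return (label, match) pairs for every pattern hit in a text blob."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

row = "Patient J. Doe, SSN 123-45-6789, MRN: 00443312, call 555-867-5309"
print(scan_text(row))
```

In practice such a scanner would be pointed at database extracts and spreadsheet exports rather than single strings, and its hits collected into an inventory report.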
|Development Notebook: The evolution of the IBM Smart Analytics System
Two key IBM developers discuss the history and background of the IBM Smart Analytics System.
|Articles||10 Oct 2010|
|Smarter is: Breaking Data and Disease Down to Size
A hospital in China teams with IBM to create a tool that can help research treatments for chronic kidney disease.
|Articles||10 Oct 2010|
|DB2 Data Warehouse Center to DB2 Data Warehouse Edition (V9) migration tool
The DWC to DWE migration tool is an Eclipse-based tool that helps DWC users migrate their existing DWC (V8) artifacts into the DWE (V9) SQL Warehousing Tool environment.
|Downloads||11 Aug 2010|
|Data Architect: You want to govern what?
Robert Catterall is a database guy, and he needs to be convinced of the need for data governance. With help from Steven Adler, director of Information Governance Solutions at IBM, he explores the tenets of data governance and discusses how it can help database teams.
|Articles||30 Jul 2010|
|Smarter is: Delivering social services where they're needed most
Alameda County Social Services Agency uses IBM DB2, InfoSphere Identity Insight Solutions, and Cognos to implement a Social Services Integrated Reporting System that helps generate the information that the agency needs to continue to qualify for state and federal funding. The solution also helps reduce waste and direct resources where they are most needed.
|Articles||30 Jul 2010|
|Predictive analytics on SAP with SPSS and InfoSphere Warehouse
Predictive analytics software helps you to find non-obvious, hidden patterns in large data sets. Current tools for predictive analytics, such as SPSS (an IBM company) and IBM InfoSphere Warehouse, expect data to be represented in an appropriate way before the actual analysis can take place. However, you may have cases where the data you want to analyze is not readily available in a format these tools can recognize. For example, SAP systems are widely used by many companies across a variety of industries, but data in SAP systems is not directly accessible to these tools. This article shows you how to use IBM InfoSphere Information Server to extract data from SAP systems for analysis within InfoSphere Warehouse and SPSS PASW Modeler.
|Articles||08 Jul 2010|
|InfoSphere Warehouse: A Robust Infrastructure for Business Intelligence
The InfoSphere Warehouse platform provides a fully integrated environment built around IBM DB2® 9.7 server technology on Linux®, UNIX® and Microsoft® Windows® platforms, as well as System z®. Common user interfaces support application development, data modeling and mapping, SQL transformation, online analytical processing (OLAP), and data mining functionality from virtually all types of information. Built on a component-based architecture, it extends the DB2 data warehouse with design-side tooling and runtime infrastructure for OLAP, data mining, in-line analytics, and intra-warehouse data movement and transformation, on a common platform.
|Redbooks||21 Jun 2010|
|Taking the first steps toward data quality
How do you create a data quality project or program? What specific steps should you take, and why? What should you focus on and what can you leave alone? This article outlines smart moves you can make to kick-start your data quality efforts, and presents a plan for initiating a data quality program.
|Articles||30 Apr 2010|
|Using IBM InfoSphere Warehouse Design Studio with pureXML data, Part 2: Create a control flow for multiple ETL jobs involving XML
Learn how to integrate business-critical XML data into your data warehouse using IBM InfoSphere Warehouse Design Studio and DB2 9.7 pureXML. This two-part article series provides step-by-step instructions for using pureXML as both a source and target data source for extract, transform, and load (ETL) operations developed with InfoSphere Warehouse Design Studio. This article describes how to build a single control flow that calls multiple data flows that extract, transform, and load XML data in a specific sequence.
Also available in: Portuguese
|Articles||01 Apr 2010|
|Using IBM InfoSphere Warehouse Design Studio with pureXML data, Part 1: Create an ETL data flow to populate a hybrid data warehouse
Learn how to integrate business-critical XML data into your data warehouse using IBM InfoSphere Warehouse Design Studio and DB2 9.7 pureXML. This two-part article series provides step-by-step instructions for using pureXML as both a source and target data source for extract, transform, and load (ETL) operations developed with InfoSphere Warehouse Design Studio. This article explains how to build a single data flow that uses an XML-based source table to populate two target data warehouse tables. One of these tables contains only relational data, while the other contains both relational and XML data.
|Articles||25 Mar 2010|
|High-performance data mining
An examination of the capabilities of IBM InfoSphere Balanced Warehouse through a scoring performance study
|Articles||19 Oct 2009|
|Using the IBM InfoSphere Warehouse 9.7 Administration Console, Part 1: Getting started and setting up
This series of articles introduces the IBM InfoSphere Warehouse Administration Console 9.7, which is part of the IBM InfoSphere Warehouse product. The InfoSphere Warehouse Administration Console provides an integrated Web portal to administer, monitor, and configure related InfoSphere Warehouse components in an operational environment. Part 1 of this series provides a general introduction to the new features delivered in version 9.7, and it covers topics related to configuration and resource management, including security configuration, system metadata configuration, configuration services management, logging services management, connection management, and system resource management.
Also available in: Russian
|Articles||21 Sep 2009|
|Using virtual cubes in IBM InfoSphere Warehouse 9.7 to combine business scenarios and to improve performance
Virtual cubes are one of the new Cubing Services features in IBM InfoSphere Warehouse 9.7. A virtual cube provides a way to merge different cubes together, allowing a single query destination that returns merged results from the cubes that compose it. Virtual cubes can be used to drastically improve the response time of cube server queries by using efficient data partitioning for optimum cache utilization (in some cases, over 100 times better response times). Virtual cubes also offer a solution for combining results, such as merging different regional cubes into a country cube, or merging sales numbers with currency exchange rates to provide a global view of the business. This article explains how virtual cubes are created, how they work, and how to use them with InfoSphere Warehouse Cubing Services 9.7.
|Articles||10 Sep 2009|
|Designing and deploying a security model using IBM InfoSphere Warehouse
In IBM InfoSphere Warehouse V9.7, the Cubing Services feature modifies the way you secure your cubes and provides a way for you to secure your dimensions. You might need to limit access to your OLAP data at the level of the cube, or at the more granular level of the dimension, depending on your security requirements. In this article, you will learn how to define security on cubes and dimensions by creating roles, policies, and authorizations in the Design Studio. This article describes how to export the security model to a file and how to use the Administration Console to import it into the InfoSphere Warehouse control database. After importing the security model, you will learn how to instruct the Cube Server to enforce its rules.
|Articles||03 Sep 2009|
|Text analysis in InfoSphere Warehouse, Part 3: Develop and integrate custom UIMA text analysis engines
In the first two articles of this series, you learned about IBM InfoSphere Warehouse text analysis capabilities, how to use regular expressions and dictionaries to extract information from text, and how to publish the results with a Cognos report. This article describes how to use the Unstructured Information Management Architecture (UIMA) framework to create a custom text annotator and use it in InfoSphere Warehouse. The ability of InfoSphere Warehouse to use UIMA-based annotators in analytic flows is a powerful feature. You can write custom annotators that can extract almost any information from text. You can also use UIMA-based annotators provided by IBM, other companies, and many universities. For example, you can find UIMA annotators that tokenize words and extract concepts such as persons or sentiments.
Also available in: Spanish
|Articles||27 Aug 2009|
|Text analysis in InfoSphere Warehouse, Part 2: Dictionary-based information extraction combined with IBM Cognos
Unstructured information represents the largest, most current, and fastest growing source of information that is available today. This information exists in many different sources such as call center records, repair reports, product reviews, e-mails, and many others. The text analysis features of IBM InfoSphere Warehouse can help you uncover the hidden value in this unstructured data. This series of articles covers the general architecture and business opportunities of analyzing unstructured data with the text analysis capabilities of InfoSphere Warehouse. The integration of this capability with IBM Cognos reporting enables people across the company to exploit the text analysis results. The first article of this series gave an overview of the text analysis capabilities in InfoSphere Warehouse and showed how to use regular expressions to extract concepts from free-form text. This second article shows you how to use dictionaries for concept extraction and how you can use taxonomies to structure them. It also explains how you can present the results in an interactive Cognos report.
|Articles||09 Jul 2009|
|Generate Cognos reports using InfoSphere Warehouse Cubes
Learn how to generate professional Cognos reports using InfoSphere Warehouse data models. Cognos is SOA-based, lightweight, and user-friendly, and no programming skills are required to create reports. This article serves as a starting point for users who may have used other tools in the past and now want to generate professional Cognos reports using InfoSphere Warehouse cubes.
Also available in: Portuguese
|Articles||18 Jun 2009|
|Text analysis in InfoSphere Warehouse, Part 1: Architecture overview and example of information extraction with regular expressions
Unstructured information represents the largest, most current, and fastest growing source of information that is available today. This information exists in many different sources such as call center records, repair reports, product reviews, e-mails, and many others. The text analysis features of IBM InfoSphere Warehouse can help you uncover the hidden value in this unstructured data. This series of articles covers the general architecture and business opportunities of analyzing unstructured data with the text analysis capabilities of InfoSphere Warehouse. The integration of this capability with IBM Cognos reporting enables people across the company to exploit the text analysis results. This first article introduces the basic architecture of the text analysis feature in InfoSphere Warehouse and includes a technical example showing how to extract concepts from text using regular expressions.
|Articles||04 Jun 2009|
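The regular-expression style of concept extraction this first article introduces can be illustrated with a minimal sketch. The Python below is a hypothetical stand-in for an InfoSphere Warehouse annotator; the concept names, patterns, and repair note are invented:

```python
import re

# Toy "annotator": pull part numbers and error codes out of free-form
# repair notes. Patterns are illustrative only.
CONCEPTS = {
    "part_number": re.compile(r"\bP/N\s*([A-Z]\d{4})\b"),
    "error_code": re.compile(r"\bERR-(\d{3})\b"),
}

def annotate(note):
    """Map each concept name to the values found in the note."""
    return {name: rx.findall(note) for name, rx in CONCEPTS.items()}

note = "Replaced P/N A1234 after ERR-042; also inspected P/N B9876."
print(annotate(note))
# {'part_number': ['A1234', 'B9876'], 'error_code': ['042']}
```

The real product runs such extraction rules inside analytic flows over whole document collections and writes the extracted concepts to warehouse tables for reporting.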
|Using DB2 XQuery to extract data mining results stored as PMML
Data mining is the process of finding rules and patterns in structured data. DB2 data mining uses Intelligent Miner, which is part of InfoSphere Warehouse. Intelligent Miner stores those results in Predictive Model Markup Language (PMML) format, which is based on XML. Since the launch of DB2 9, information stored in XML can be processed efficiently using XQuery. Find out how easily you can use DB2 XQuery to create your own access methods based on your data mining results.
|Articles||21 May 2009|
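The idea of querying a PMML model as plain XML can be sketched outside DB2 as well. This Python example plays the role of the article's DB2 XQuery expressions; the PMML fragment is hand-written and far simpler than real Intelligent Miner output:

```python
import xml.etree.ElementTree as ET

# A trimmed, hand-written PMML fragment (illustrative only; real
# Intelligent Miner output is richer and namespace-qualified).
PMML = """
<PMML version="3.2">
  <AssociationModel>
    <AssociationRule support="0.12" confidence="0.81"
                     antecedent="bread" consequent="butter"/>
    <AssociationRule support="0.05" confidence="0.40"
                     antecedent="milk" consequent="eggs"/>
  </AssociationModel>
</PMML>
"""

def strong_rules(pmml_text, min_confidence):
    """Return (antecedent, consequent) pairs above a confidence cutoff,
    roughly what an XQuery path over the stored model would select."""
    root = ET.fromstring(pmml_text)
    return [
        (r.get("antecedent"), r.get("consequent"))
        for r in root.iter("AssociationRule")
        if float(r.get("confidence")) >= min_confidence
    ]

print(strong_rules(PMML, 0.5))
# [('bread', 'butter')]
```

In DB2, the equivalent filter would be expressed as an XQuery path over the stored PMML column rather than a Python list comprehension.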
|Install and configure InfoSphere Warehouse on System z
This article takes you through the installation of InfoSphere Warehouse on a Linux partition on System z. Learn about pre-installation requirements, then walk through the steps for a successful installation.
|Articles||14 May 2009|
|Multidimensional Analytics: Delivered with InfoSphere Warehouse Cubing Services
In this IBM Redbooks publication, we discuss and describe a multidimensional data warehousing infrastructure that can enable solutions for complex problems in an efficient and effective manner. The focus of this infrastructure is the InfoSphere Warehouse Cubing Services feature. With this feature, DB2 becomes the data store for large volumes of data on which you can perform multidimensional analysis, viewing complex problems from multiple perspectives and so providing more information for business decision making. The feature supports analytic interfaces from powerful data analysis tools such as Cognos 8 BI, Microsoft Excel, and Alphablox. This is a significant capability that supports and enhances the analytics that clients use as they work to resolve problems of ever-growing scope, dimension, and complexity. Analyzing problems with more detailed queries and viewing the results from multiple perspectives yields significantly more information and insight. Building multidimensional cubes directly on the underlying DB2 relational tables, without having to move or replicate the data, enables more powerful data analysis with less work and leads to faster problem resolution and more informed management decision making. This capability is known as No Copy Analytics and is made possible with InfoSphere Warehouse Cubing Services.
|Redbooks||05 May 2009|
|Integrate InfoSphere Warehouse Data Mining with IBM Cognos Reporting, Part 4: Customer segmentation with InfoSphere Warehouse and Cognos
In the previous articles of this series, you learned different techniques for integrating InfoSphere Warehouse Data Mining and simple Cognos reports. This latest article teaches you how to use some of the same integration techniques to create a more complex report that focuses on the task of customer segmentation. Customer segmentation allows companies to cluster their customers into characteristic groups. One important issue in this task is explaining to the user the meaning of the individual customer segments. Interactive Cognos reports can help you do this. The article uses a step-by-step example to teach you how to create a report that visualizes cluster statistics and thus lets you find out what makes the customers in a given segment special. The article also shows you how to enable drill-through for accessing details of individual customers within a segment.
Also available in: Vietnamese
|Articles||29 Jan 2009|
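The per-segment statistics such a report visualizes can be sketched in a few lines. This Python example is illustrative only: the segment labels and customer attributes are invented, and in the article the segments themselves come from InfoSphere Warehouse Data Mining clustering:

```python
from statistics import mean

def segment_profile(customers, key):
    """Summarize each customer segment by size, average age, and
    average spend -- the kind of per-cluster statistics a
    segmentation report visualizes."""
    segments = {}
    for c in customers:
        segments.setdefault(c[key], []).append(c)
    return {
        seg: {"n": len(cs),
              "avg_age": mean(c["age"] for c in cs),
              "avg_spend": mean(c["spend"] for c in cs)}
        for seg, cs in segments.items()
    }

customers = [
    {"segment": "A", "age": 30, "spend": 120},
    {"segment": "A", "age": 40, "spend": 80},
    {"segment": "B", "age": 55, "spend": 300},
]
print(segment_profile(customers, "segment"))
```

Comparing a segment's averages against the overall averages is what makes it possible to explain why a segment is "special".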
|Integration of InfoSphere Warehouse Data Mining with IBM Cognos Reporting, Part 3: Invoke mining dynamically from Cognos using a market basket analysis example
Association rules express which items, events, or other entities often occur simultaneously in large datasets. This knowledge can be applied, for instance, in market basket analysis to leverage cross-selling potential by recommending products that are often bought together. You can apply association rule mining in InfoSphere Warehouse and export the resulting model to Cognos reports, similar to the way previous articles in this series did with cluster and classification models. Since association rule mining is a highly interactive task, a better solution is to allow the user to call mining directly from a Cognos report, possibly specifying additional parameters. Such an approach can be called dynamic or ad hoc mining. In this article, you will learn how to achieve this.
|Articles||30 Dec 2008|
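The core of market basket analysis, counting which items co-occur in transactions, can be sketched briefly. This naive Python example stands in for InfoSphere Warehouse's association rule mining, which also computes support and confidence over far larger data; the baskets are invented:

```python
from itertools import combinations
from collections import Counter

def pair_rules(baskets, min_support):
    """Naive market-basket sketch: count item pairs across baskets and
    report those bought together in at least min_support baskets."""
    counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

baskets = [
    ["bread", "butter", "milk"],
    ["bread", "butter"],
    ["bread", "jam"],
    ["butter", "milk"],
]
print(pair_rules(baskets, 2))
# {('bread', 'butter'): 2, ('butter', 'milk'): 2}
```

A real miner would generalize this from pairs to arbitrary itemsets and derive directed rules with confidence values, which is exactly what the exported PMML model contains.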
|Integrate IBM InfoSphere Warehouse data mining with IBM Cognos reporting, Part 2: Deviation detection with InfoSphere Warehouse and Cognos
Learn advanced techniques, such as drill-down and the extraction of structured information from data mining models with Cognos. Using the included business scenario and running example, understand the data mining task of deviation detection, that is, the task of identifying anomalous data records. See how to find such records with IBM InfoSphere Warehouse data mining, and create reports that allow interactive exploration.
|Articles||26 Nov 2008|
|Integrate InfoSphere Warehouse data mining with IBM Cognos reporting, Part 1: Overview of InfoSphere Warehouse and Cognos integration
Get an introduction to the basic integration architecture involved in integrating InfoSphere Warehouse data mining with IBM Cognos reporting, in Part 1 of this series. Also, examine a technical case study to gain a basic understanding of how to achieve the integration.
|Articles||30 Oct 2008|
|Hints and tips for using the Business Intelligence and Reporting Tools
The open source, Eclipse-based Business Intelligence and Reporting Tools project brings advanced reporting capabilities to Information Management products such as DB2 Data Warehouse Edition and WebSphere RFID Information Center. This article shows you how to go beyond the basics to implement additional functions to meet the detailed reporting needs of your user community.
|Articles||09 Aug 2007|
|Advanced topics for DB2 Data Warehouse Edition users, Part 3: Command Line Interface for DB2 Data Warehouse Edition SQL Warehousing
This article introduces an early version of the IBM DB2 Data Warehouse Edition Administration Command Line Interface. The DWE CLI extends the existing infrastructure to support execution of administrative and monitoring tasks in non-GUI environments, the automation of recurring tasks, and handling of large task batches.
|Articles||05 Apr 2007|
|Improve DB2 query performance in a business intelligence environment
Running large queries efficiently is a top performance challenge in a business intelligence environment. Learn techniques for improving DB2 data server query performance in this environment. Walk through various methods, step-by-step, then experiment on your own system. Each method is applied to a single SQL statement, using the db2batch tool to measure performance.
|Articles||22 Mar 2007|
|Advanced topics for DWE users Part 2: Best practices for choosing DB2 Data Warehouse Edition SQL Warehousing variable phases
Optimize your DB2 Data Warehouse Edition flow design with a good understanding of variable usage in order to promote user satisfaction, support flow reuse, and help reduce administration overhead. In this article, find recommendations and best practices on using the different variable phases to get the most out of variables in SQL Warehousing Tool data flows and control flows at design time, and learn how each variable phase can affect runtime behavior when these flows are executed.
|Articles||01 Feb 2007|
|Advanced topics for DB2 Data Warehouse Edition users, Part 1: Crash recovery utility for DB2 Data Warehouse Edition SQL Warehousing Tool
Protect your data warehouse environment using a new, downloadable crash recovery utility. With this tool, perform health checks and recover inconsistent IBM DB2 Data Warehouse Edition SQL Warehousing runtime metadata caused by unhandled interrupts. After the tool corrects the inconsistent metadata, the interrupted process instances are available for restart or termination.
|Articles||18 Jan 2007|
|Text Mining for associations using UIMA and DB2 Intelligent Miner
Get more value from your unstructured information. Explore how you can mine text using UIMA and DB2 Intelligent Miner to bridge from unstructured to structured information.
|Articles||02 Feb 2006|
|IBM Redbooks: Business Performance Management ... Meets Business Intelligence
In this IBM Redbook, we discuss business performance management (BPM) and its integration with business intelligence. BPM is all about taking a holistic approach to managing business performance and achieving business goals.
|Articles||16 Aug 2005|
|Deliver an effective and flexible data warehouse solution, Part 1: Engage your customers in planning the data warehouse project
Take a flexible and effective approach to plan, design, and implement a basic data warehouse solution with this article series. Part 1 focuses on the customer engagement process and project planning.
|Articles||16 Jun 2005|
|Business Intelligence solutions architecture
Build a data warehouse solution using IBM technology.
|Articles||26 May 2005|
|Data mining with association rules using WebSphere Commerce Analyzer
This short tutorial shows how you can run association rule mining against the WebSphere Commerce operational database, or against a WebSphere Commerce Analyzer data mart so that the production system is not impacted. You can use the results from association rule mining to set up bundles of products that customers tend to buy together.
|Articles||24 Nov 2004|
|Building business intelligence skills
Data warehousing and business intelligence projects require very specific and sound technical skills. It is no mean feat to collect data from many different sources and ensure it is validated, combined, structured, stored, distributed, and analyzed reliably and correctly. This requires sound knowledge of information analysis techniques and of data warehousing and business intelligence technology. One also needs a level of business acumen to understand how business intelligence technology can support the decision-making process within the organization.
|Articles||22 Jan 2004|
|Using DB2 UDB OLAP functions
Online analytical processing (OLAP) functions are very flexible and powerful. Using them, you can often find simple solutions to problems that would otherwise require either iteration through one or more cursors, or recursion. In other cases, it is much easier to write a query using OLAP functions or auxiliary tables than to write an equivalent query without them.
|Articles||22 Jan 2004|
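What an OLAP function such as SUM(amount) OVER (PARTITION BY dept ORDER BY month) computes can be illustrated procedurally. This Python sketch emulates a per-group running total, the kind of result that would otherwise need a cursor or self-join; the column names and data are invented:

```python
from collections import defaultdict

def running_totals(rows):
    """Emulate SUM(amount) OVER (PARTITION BY dept ORDER BY month):
    a per-department running total computed in one pass over
    the ordered rows."""
    totals = defaultdict(int)
    out = []
    for dept, month, amount in sorted(rows, key=lambda r: (r[0], r[1])):
        totals[dept] += amount
        out.append((dept, month, amount, totals[dept]))
    return out

rows = [("sales", 1, 100), ("sales", 2, 50), ("hr", 1, 30)]
for row in running_totals(rows):
    print(row)
# ('hr', 1, 30, 30)
# ('sales', 1, 100, 100)
# ('sales', 2, 50, 150)
```

In SQL the whole computation is a single windowed SELECT; the procedural version above is only meant to show what the window specification is doing.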
|Create an intelligent and flexible solution with BPM, Business Rules, and Business Intelligence: Sense and respond in the data warehouse
In our first article, we introduced the overall concepts of on-demand business, business process management (BPM), business rules engines, and business intelligence. In the second article, we demonstrated how a business rules engine can serve to externalize policy from the business process manager. In the third article, we discussed the details of how to make business intelligence data visible to the business process. In this final article in the series, we touch on how analytics in concert with BPM can create a dynamic and flexible sense-and-respond environment.
|Articles||04 Dec 2003|
|Create an intelligent and flexible solution with BPM, Business Rules, and Business Intelligence: Data warehouse visibility
This article focuses on the role of business intelligence in an intelligent and flexible BPM solution. It discusses the application of online analytical processing (OLAP) technology for advanced analytics, considering both OLAP access and integration. This includes drilling down to the technical details of implementing a dynamic pricing model. Finally, it introduces DB2 Cube Views, an extension to the OLAP paradigm that, when combined with OLAP products, produces optimized OLAP solutions for enterprises.
|Articles||20 Nov 2003|
|Enhance Your Business Applications: Simple Integration of Advanced Data Mining Functions
Data mining can be used just like any other standard relational function. Part 1 of this redbook helps business analysts and implementers understand and position these new DB2 data mining functions. Part 2 provides examples showing implementers how to quickly and easily integrate the data mining functions into business applications. Part 3 helps database administrators and IT developers configure these functions once to prepare them for use and integration in any application.
|Redbooks||08 Jan 2003|
|DB2 UDB's High-Function Business Intelligence in e-business
This IBM Redbook deals with exploiting DB2 UDB's materialized views (also known as ASTs/MQTs), statistics, analytic, and OLAP functions in e-business applications to achieve superior performance and scalability. This redbook is aimed at a target audience of DB2 UDB application developers, database administrators (DBAs), and independent software vendors (ISVs).
|Redbooks||03 Oct 2002|
|DB2 OLAP Server V8.1: Using Advanced Functions
This redbook explores the DB2 OLAP Server V8.1 enhancements and strengths in the areas of scalability, performance, high concurrency, high availability, and administration, and shows how to deploy them across an enterprise environment.
|Redbooks||03 Oct 2002|
|Connector for SAP R/3: A Ready-to-go Warehousing Solution
Realize the value of integrating critical SAP R/3 data with other important business data for enhanced business intelligence. Incorporate SAP business objects into your DB2 data warehouse quickly and easily using Connector for SAP R/3.
|Articles||01 Apr 2002|
|New OLAP Miner Feature
OLAP Miner Fixpak 6 provides a no-charge DB2 OLAP Miner feature, which can help you analyze your business data by automatically scanning OLAP cubes to find atypical values.
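"Finding atypical values" is, at its core, deviation detection. A minimal sketch of the idea, using a simple z-score test rather than OLAP Miner's actual algorithm (the threshold and sample data are invented for illustration):

```python
import statistics

def atypical(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [v for v in values if abs(v - mean) / sd > threshold]

monthly_sales = [100, 105, 98, 102, 310, 99, 101]
print(atypical(monthly_sales))  # -> [310]
```

A cube-scanning tool applies this kind of test along every slice of the cube, so an analyst does not have to eyeball each cross-tabulation by hand.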
|Product documentation||06 Dec 2001|
|Mining Your Own Business in Telecoms using DB2 Intelligent Miner for Data
This IBM Redbook is a solution guide that addresses business issues in telecommunications through real usage experience, and positions the value of DB2 Intelligent Miner for Data in a Business Intelligence architecture as an integrated solution.
|Redbooks||12 Oct 2001|
|Mining Your Own Business in Health Care Using DB2 Intelligent Miner for Data
This IBM Redbook is a solution guide that addresses business issues in health care through real usage experience, and positions the value of DB2 Intelligent Miner for Data in a Business Intelligence architecture as an integrated solution.
|Redbooks||12 Oct 2001|
|Operation Data Mining
This article explains the considerations in operationalizing data mining, how DB2 Intelligent Miner Scoring moves the model deployment process out of the hands of analysts and into the standard database administrative process, and what data mining developments to expect in the future.
|Articles||27 Jul 2001|
|e-Business Intelligence Front-End Tool Access to OS/390 Data Warehouse
Based on the IBM e-BI architectural framework, this Redbook explains the building blocks required to connect front-end e-BI user tools to the OS/390 data warehouse or data mart. It also shows the Web integration in the architecture and the accessibility of the data warehouse from browser-based clients in an e-business environment.
|Redbooks||03 Jul 2001|
|Reading between the lines
This article discusses methods of mining unstructured data, including three fundamental text mining operations: clustering, categorization, and information retrieval.
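Of the three operations, categorization is the easiest to sketch: represent each document as a term-frequency vector and assign it to the category profile with the highest cosine similarity. This is a minimal illustration of the technique, not the tooling the article describes; the category profiles and sample text are invented.

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector over lowercase whitespace-separated tokens."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def categorize(doc, categories):
    """Assign doc to the category whose keyword profile it most resembles."""
    vec = tf_vector(doc)
    return max(categories, key=lambda c: cosine(vec, tf_vector(categories[c])))

profiles = {
    "finance": "revenue profit earnings quarter stock market",
    "health": "patient treatment clinical hospital care",
}
print(categorize("hospital care for the patient improved", profiles))  # prints "health"
```

Clustering uses the same vector representation but groups documents by mutual similarity instead of matching them against predefined profiles.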
|Articles||01 Mar 2001|