Technical library

InfoSphere Guardium data security and protection for MongoDB Part 2: Configuration and policies
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium, including the configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process. Part 2 describes how to configure InfoSphere Guardium to collect MongoDB traffic and describes how to create security policy rules for a variety of typical data protection use cases, such as alerting on excessive failed logins, monitoring privileged users, and alerting on unauthorized access to sensitive data. Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations.
Also available in: Chinese   Portuguese  
16 Sep 2014
Feed InfoSphere Streams applications with live data from databases
This tutorial shows you how to connect IBM InfoSphere Data Replication Change Data Capture (CDC) as a near real-time data source for InfoSphere Streams. Walk through several integration options for a ready-to-use and custom user exit, then explore the pros and cons of each. Downloadable example sources for the user exit are provided and can be customized to fit your requirements.
04 Sep 2014
Reduce the source footprint of CDC by querying databases for archive logs
Learn how the Change Data Capture (CDC) Oracle Redo physical standby configuration in InfoSphere Data Replication allows CDC to read logs off the standby device regardless of the log shipment method. This configuration reduces the source footprint by querying the standby database for archive log information.
21 Aug 2014
Jumping through Hadoop: Stream Big Data video on a mobile app by integrating IBM Worklight with IBM InfoSphere BigInsights on IBM Bluemix
This article explains how you can integrate video data streaming with a mobile application. In doing so, it also highlights some of the business problems addressed and opportunities available by using several cutting-edge technologies: IBM InfoSphere BigInsights on IBM Bluemix and IBM Worklight. This is an example of a compelling trend that demands cloud service integration of big data to mobile devices for video streaming.
Also available in: Japanese  
13 Aug 2014
A brief comparative perspective on SQL access for Hadoop
Although Hadoop is often thought of as the one-size-fits-all solution for big data processing problems, the project is limited in its ability to manage large-scale graph processing, stream processing, and scalable processing of structured data. Learn about Big SQL, a massively parallel processing SQL engine that is optimized for processing large-scale structured data. See how it compares to other systems that were recently introduced to improve the efficiency of the Hadoop framework for processing large-scale structured data.
12 Aug 2014
Develop an iOS application with the InfoSphere Business Glossary REST API
InfoSphere Business Glossary enables you to create, manage, and share an enterprise vocabulary and classification system. InfoSphere Business Glossary includes a REST API that makes glossary content easier to consume by enabling the development of custom applications. In this article, step-by-step instructions show how to develop a dynamic iOS application using the InfoSphere Business Glossary REST API. The application lets users find business glossary assets, examine the asset's details, and contact the steward using the native phone and email applications on the iOS device. After building the sample application, you'll be on your way to using the REST API to create your own custom applications with InfoSphere Business Glossary.
Also available in: Chinese  
07 Aug 2014
Develop a custom data migration tool for Reference Data Management
Migrating data from one IBM InfoSphere Master Data Management (MDM) Reference Data Management installation to another can be a challenging task. Using an IBM internal tool as an example, this article explores how to develop a custom solution for complex migration problems using Reference Data Management REST or MDM SOAP services. The solution is for all operating systems, application servers, and databases. The custom tool can migrate data from a lower version of Reference Data Management to a higher version, specifically from V10 to V11. Your custom tool can be designed to work well for future versions of Reference Data Management (assuming the service interfaces remain the same).
Also available in: Chinese
31 Jul 2014
IBM InfoSphere Optim Data Growth: Setting up your first Archive
This article introduces IBM InfoSphere Optim Data Growth. It explains the product's fundamental elements and shows, step by step, how to set up your first archive request, helping a technical audience get comfortable with the product quickly.
21 Jul 2014
Integrate data with SoftLayer Object Storage by using IBM InfoSphere DataStage
This article illustrates how to use IBM InfoSphere DataStage to integrate with SoftLayer Object Storage.
10 Jul 2014
Integrate data with Cloudant and CouchDB NoSQL database using IBM InfoSphere Information Server
In this article, learn how to use the Cloudant database and CouchDB with the IBM InfoSphere DataStage V11.3 Hierarchical stage (previously called XMLConnector). Detailed examples show how to invoke the Cloudant API, using the REST service of the Hierarchical Data stage, to access and change data on the Cloudant DB with support of HTTP basic authentication. You can modify the example to perform the same integration operations on a CouchDB database. The examples also use other steps of the Hierarchical stage to retrieve documents and parse or compose the retrieved documents into a relational structure.
Also available in: Chinese  
10 Jul 2014
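As the Cloudant summary above notes, the Hierarchical Data stage invokes the Cloudant REST API with HTTP basic authentication. The sketch below shows, in Python, how such a request is assembled; the account, database, document ID, and credentials are illustrative placeholders, not values from the article.

```python
import base64

def build_cloudant_request(account, db, doc_id, user, password):
    """Build the URL and headers for reading one Cloudant document over REST.

    All names passed in (account, db, doc_id, credentials) are hypothetical
    examples; substitute your own Cloudant account details.
    """
    url = f"https://{account}.cloudant.com/{db}/{doc_id}"
    # HTTP basic authentication: base64-encode "user:password"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Accept": "application/json",
    }
    return url, headers

url, headers = build_cloudant_request("myaccount", "orders", "doc-1",
                                      "alice", "secret")
print(url)
print(headers["Authorization"])
```

The same URL-plus-basic-auth pattern applies to a CouchDB endpoint; only the host changes.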
Deploy InfoSphere MDM Collaborative Edition onto a cluster, Part 1: Strategies for mixed clustered topologies on an application server
This two-part tutorial shows how to set up a typical IBM InfoSphere Master Data Management (MDM) Collaborative Edition V11 clustered environment. This first part describes how to deploy InfoSphere MDM Collaborative Edition V11 onto a cluster. Follow along step-by-step with detailed examples of several clustering strategies. Also learn about the required configuration to enable the IBM HTTP Server for caching and load balancing the cluster nodes.
10 Jul 2014
Integrate the Information Governance Catalog and IBM InfoSphere DataStage using REST
IBM InfoSphere DataStage is a data integration tool that lets you move and transform data between operational, transactional, and analytical target systems. In this article, learn to use InfoSphere DataStage to integrate with the REST resources of the Information Governance Catalog Glossary. A sample use case shows how to design a DataStage job to take advantage of the Hierarchical Data stage to access and author the Information Governance Catalog Glossary content. The REST step is a new capability of the Hierarchical Data stage (previously called XML Connector in InfoSphere Information Server) to invoke the REST Web services with support for different authentication mechanisms and Secure Socket Layer (SSL). Learn to parse the response body of the REST call and apply transformations on the data.
03 Jul 2014
Using the MDM Application Toolkit to build MDM-centric business processes, Part 4: Work with MDM integration services
This is the fourth article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the IBM InfoSphere Master Data Management (MDM) application toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article explores the BPM integration services provided with the Application Toolkit. Learn how these services can help you to create workflows easily that integrate with InfoSphere MDM.
26 Jun 2014
Prepare the server environment to integrate IBM DataStage and InfoSphere Data Replication CDC
IBM InfoSphere Data Replication (IIDR) Change Data Capture (CDC) offers direct integration with InfoSphere DataStage by using a transaction stage job. In this article, follow the step-by-step process that outlines all the required tasks to prepare the DataStage environment to enable IIDR CDC integration.
19 Jun 2014
Measure the impact of DB2 with BLU Acceleration using IBM InfoSphere Optim Workload Replay
In this article, learn to use IBM InfoSphere Workload Replay to validate the performance improvement of InfoSphere Optim Query Workload Tuner (OQWT) driven implementation of DB2 with BLU Acceleration on your production databases. The validation is done by measuring the actual runtime change of production workloads that are replayed in an isolated pre-production environment.
Also available in: Chinese  
08 May 2014
Archiving and recovery solutions for IBM Business Process Manager using InfoSphere Optim
Using simplified examples, this article shows how you can use IBM InfoSphere Optim to control data growth and, at the same time, maintain data privacy with IBM BPM.
Also available in: Russian  
30 Apr 2014
Use change data capture technology in InfoSphere Data Replication with InfoSphere BigInsights
Learn how to capture the changes made on source transactional databases such as IBM DB2 and Oracle, and replicate them to the Apache Hadoop Distributed File System in IBM InfoSphere BigInsights. Use change data capture replication technology in InfoSphere Data Replication 10.2 for InfoSphere DataStage with InfoSphere BigInsights support.
25 Mar 2014
Using the MDM Application Toolkit to build MDM-centric business processes, Part 3: Manage MDM hierarchies in BPM with the Application Toolkit
This is the third article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the IBM InfoSphere Master Data Management (MDM) application toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article focuses on extending the BPM Hello World scenario (introduced in Part 1) by displaying MDM hierarchical data in a BPM application. Learn how a REST server for the Application Toolkit is installed and configured to retrieve data from the MDM operational server. You'll use the IBM Process Designer, which is a component of BPM, to create and modify process applications.
15 Mar 2014
Ensuring transactional consistency with Netezza when using CDC and DataStage
This article shows you how to configure the Netezza Connector properly when transactions coming from InfoSphere Data Replication’s Change Data Capture (CDC) are first passed through DataStage before being written to PureData System for Analytics (Netezza). Walk through a use case where a flat file implementation provides near real-time experience. The author highlights Netezza Connector implementations that do and do not work.
Also available in: Russian  
06 Mar 2014
Integrate InfoSphere Streams with InfoSphere Data Explorer
Learn how to integrate IBM InfoSphere Streams with InfoSphere Data Explorer to enable Streams operators to connect to Data Explorer to insert and update records. The article focuses on InfoSphere Streams 3.0 or higher and InfoSphere Data Explorer 8.2.2 or 8.2.3.
04 Mar 2014
Optimizing BDFS jobs using InfoSphere DataStage Balanced Optimization
This article explains how to use InfoSphere DataStage Balanced Optimization to rewrite Big Data File Stage (BDFS) jobs into Jaql. The BDFS stage, introduced in InfoSphere Information Server 9.1, operates on InfoSphere BigInsights. To optimize the performance of BDFS jobs, InfoSphere DataStage Balanced Optimization redesigns a job to maximize performance by minimizing the amount of input and output performed and by balancing the processing across source, intermediate, and target environments. Readers will learn how to use Balanced Optimization with the BDFS stage, and which configuration parameters the Jaql Connector requires when a job is optimized.
20 Feb 2014
Applying IBM InfoSphere Information Analyzer rules for operational quality measurements
A key challenge in the day-to-day management of any solution is measuring whether the solution components are meeting IT and business expectations. Given these requirements, it becomes incumbent upon IT to build processes through which performance against these objectives can be tracked. By employing such measurements, IT can take action whenever thresholds for expected operational behavior are exceeded. Assessing and monitoring operational quality in information integration processes requires establishing rules that are meaningful in relation to the existing operational metadata. Rather than starting from a blank slate, this article demonstrates how to use pre-built rule definitions from IBM InfoSphere Information Analyzer to get under way in tracking the operational quality of IBM InfoSphere Information Server's data integration processing.
09 Jan 2014
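The operational-quality rules described above boil down to evaluating metrics from run metadata against thresholds and flagging breaches. A minimal Python sketch of that pattern follows; the job names, metric, and 1% threshold are illustrative assumptions, not values from the article.

```python
# Hypothetical sketch: flag ETL runs whose reject rate exceeds a threshold,
# mimicking the kind of rule a data-quality monitor evaluates over
# operational metadata. All names and numbers are illustrative only.

def breaches(runs, max_reject_rate):
    """Return the names of jobs whose reject rate exceeds the threshold."""
    flagged = []
    for run in runs:
        # Guard against runs that processed zero rows
        rate = run["rejected"] / run["processed"] if run["processed"] else 0.0
        if rate > max_reject_rate:
            flagged.append(run["job"])
    return flagged

runs = [
    {"job": "load_customers", "processed": 10000, "rejected": 12},
    {"job": "load_orders", "processed": 5000, "rejected": 400},
]
print(breaches(runs, 0.01))  # jobs above a 1% reject rate
```

In practice the thresholds would come from the pre-built rule definitions and the run records from the engine's operational metadata, but the evaluate-and-flag loop is the same.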
What's new in InfoSphere Workload Replay for DB2 for z/OS v2.1
InfoSphere Optim Workload Replay for DB2 for z/OS (Workload Replay) extends traditional database test coverage. Now you can capture production workloads and replay them in your test environment without the need to set up a complex client and middle-ware infrastructure. In October 2013, version 2.1 of Workload Replay was released, with key enhancements that we describe in this article.
19 Dec 2013
Process big data with Big SQL in InfoSphere BigInsights
SQL is a practical querying language, but it has limitations. Big SQL enables you to run complex queries on non-tabular data and query it with an SQL-like language. The difference with Big SQL is that you are accessing data that may be non-tabular, and may in fact not be based on a typical SQL database structure. Using Big SQL, you can import and process large-volume data sets, including by taking the processed output of other processing jobs within InfoSphere BigInsights and turning that information into easily queryable data. In this article, we look at how you can replace your existing infrastructure and queries with Big SQL, and how to take more complex queries and convert them to make use of your Big SQL environment.
Also available in: Russian  
03 Dec 2013
InfoSphere Data Architect: Best practices for modeling and model management
This article explains how to create models with better maintainability and to change the models in a team environment using best practices. The article also details the product settings that you should set for the product to work at optimal level with the given resources.
14 Nov 2013
InfoSphere Guardium data security and protection for MongoDB, Part 1: Overview of the solution and data security recommendations
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium. Part 1 provides an overview of the solution, the architecture, and the benefits of using InfoSphere Guardium with MongoDB. The value of the fast-growing class of NoSQL databases such as MongoDB is the ability to handle high velocity and volumes of data while enabling greater agility with dynamic schemas. Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations. This article series describes configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process.
Also available in: Chinese   Portuguese  
30 Oct 2013
Using the MDM Application Toolkit to build MDM-centric business processes, Part 2: Performing CRUD operations against MDM using the Application Toolkit
This is the second in a series of articles that describe how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the InfoSphere Master Data Management (MDM) Application Toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article focuses on extending the Hello World scenario to perform a full set of create, retrieve, update, and delete (CRUD) operations against an MDM operational server. The article shows you how to create human services quickly and simply within BPM to drive operations on the MDM server. I start by constructing a create process, and then proceed with update, delete, and retrieve. This article focuses on the speed and simplicity with which you can create CRUD business processes using the MDM Application Toolkit. While more advanced interactions are possible, they are deferred for later articles in the series.
24 Oct 2013
Using InfoSphere Streams with memcached and Redis
InfoSphere Streams is a powerful middleware product that provides a platform for development of streaming applications and their execution in a fast and distributed manner. In a stream processing system, there is often a need to externalize the application-related state information and share it with other distributed application components. It is possible to do distributed data sharing within the context of a Streams application. This is achieved through the use of the Distributed Process Store (dps) toolkit that provides a key-value store abstraction. The shared state can be stored in memcached or Redis -- two popular open-source distributed state management systems.
22 Oct 2013
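The entry above describes the dps toolkit's key-value store abstraction over memcached or Redis. The real toolkit is used from SPL operators; the Python class below is only an analogy showing the put/get/has style of shared-state access the abstraction provides, with an in-memory dict standing in for the external store.

```python
class ProcessStore:
    """In-memory stand-in for a shared key-value store such as Redis or
    memcached. The actual dps toolkit exposes a similar put/get/has
    interface to SPL operators; this Python analogy is illustrative only."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def has(self, key):
        return key in self._data

store = ProcessStore()
store.put("sensor:42:last_reading", 98.6)   # one operator writes state...
print(store.get("sensor:42:last_reading"))  # ...another operator reads it
print(store.has("sensor:99:last_reading"))
```

Swapping the dict for a Redis or memcached client changes the backing store without changing the calling code, which is the point of the abstraction.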
Guaranteed delivery with InfoSphere DataStage
This article describes how you can use the InfoSphere DataStage Distributed Transaction Stage to guarantee delivery of data. It also addresses the use of local transactions within DataStage database stages. Finally, it describes how the Change Data Capture Transaction stage works with InfoSphere Data Replication to guarantee the delivery of changes to a target database.
Also available in: Chinese  
17 Oct 2013
Develop custom KPIs using the Policy Monitoring JobFramework
This article discusses the basic structure of the JobFramework and its application to the definition of a custom KPI, using the Latency KPI as an example. The Latency KPI calculates the time that is required to propagate data changes from the data sources to the operational server, which is an important characteristic of the data consistency and trustworthiness. This article also describes how to navigate the new Latency KPI reports using IBM Cognos Business Intelligence Server.
Also available in: Chinese  
10 Oct 2013
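The Latency KPI described above measures the time needed to propagate a change from a data source to the operational server. A minimal sketch of that calculation, assuming the two events carry comparable timestamps (the format and sample times are illustrative assumptions):

```python
from datetime import datetime

def latency_seconds(source_ts, applied_ts):
    """Latency = time between a change on the source system and its
    appearance on the operational server. Timestamp format is assumed."""
    fmt = "%Y-%m-%d %H:%M:%S"
    source = datetime.strptime(source_ts, fmt)
    applied = datetime.strptime(applied_ts, fmt)
    return (applied - source).total_seconds()

# Hypothetical pair of timestamps for one propagated change
print(latency_seconds("2013-10-01 12:00:00", "2013-10-01 12:00:45"))
```

A KPI implementation would aggregate this per-change value (average, percentile) over a reporting window rather than report single measurements.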
System log analysis using InfoSphere BigInsights and IBM Accelerator for Machine Data Analytics
When understood, logs are a goldmine for debugging, performance analysis, root-cause analysis, and system health assessment. In this real business case, see how InfoSphere BigInsights and the IBM Accelerator for Machine Data Analytics are used to analyze system logs to help determine root causes of performance issues, and to define an action plan to solve problems and keep the project on track.
01 Oct 2013
Working with Big SQL extended and complex data types
Big SQL, a SQL interface introduced in InfoSphere BigInsights, offers many useful extended data types. In general, a data type defines the set of properties for the values being represented, and these properties dictate how the values are treated. Big SQL supports a rich set of data types, including extended data types that are not supported by Apache Hive. With the data types supported by Big SQL, it's easier to represent and process semi-structured data. Using the code samples and queries included, learn how to use Big SQL complex data types in simple and nested form and how to create and implement these types in an application. As an added bonus, see how to use the Serializer Deserializer (SerDe) to work with JSON data.
Also available in: Chinese   Russian  
24 Sep 2013
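Conceptually, what a JSON SerDe and complex types give you is a mapping from nested documents to rows. The Python sketch below illustrates that flattening on a made-up order document (the field names and values are illustrative, not from the article):

```python
import json

# Illustrative only: flatten a nested JSON document (the kind a SerDe maps
# to complex types such as structs and arrays) into relational rows.
record = json.loads("""
{"order_id": 7,
 "customer": {"name": "Acme", "city": "Austin"},
 "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}
""")

# One output row per array element, repeating the parent struct's fields
rows = [
    (record["order_id"], record["customer"]["name"], item["sku"], item["qty"])
    for item in record["items"]
]
print(rows)
```

In Big SQL the same unnesting is expressed declaratively in the query, but the row shape produced is the same.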
Get to know the R-project Toolkit in InfoSphere Streams
InfoSphere Streams addresses a crucial emerging need for platforms and architectures that can process vast amounts of generated streaming data in real time. The R language is popular and widely used among statisticians and data miners for developing statistical software for data manipulation, statistical computations, and graphical displays. Learn about the InfoSphere Streams R-project Toolkit that integrates with the powerful R suite of software facilities and packages.
Also available in: Chinese   Russian  
17 Sep 2013
Getting started with real-time stream computing
Use InfoSphere Streams to turn volumes of data into information that helps predict trends, gain competitive advantage, gauge customer sentiment, monitor energy consumption, and more. InfoSphere Streams acts on data in motion for real-time analytics. Get familiar with the product and find out where to go for tips and tricks that speed implementation.
Also available in: Chinese   Russian  
10 Sep 2013
Do I need to learn R?
R is a flexible programming language designed to facilitate exploratory data analysis, classical statistical tests, and high-level graphics. With its rich and ever-expanding library of packages, R is on the leading edge of development in statistics, data analytics, and data mining. R has proven itself a useful tool within the growing field of big data and has been integrated into several commercial packages, such as IBM SPSS and InfoSphere, as well as Mathematica. This article offers a statistician's perspective on the value of R.
Also available in: Chinese  
03 Sep 2013
Managing your InfoSphere Streams cluster with IBM Platform Computing
Today, the challenge for many organizations is extracting value from the imposing volumes of data available to them. Tackling the big data challenge can fundamentally improve how an organization does business and makes decisions. But managing your big data infrastructure doesn't have to be challenging. With the appropriate management strategy and tools, multiple large environments can be set up and managed efficiently and effectively. This article describes how to use IBM Platform Computing to set up and manage IBM InfoSphere Streams environments that will analyze big data in real time.
20 Aug 2013
Best practices for using InfoSphere Federation Server to integrate web service data sources
This article introduces the overall architecture of the Web services wrapper of the IBM InfoSphere Federation Server. It explains how to integrate data from web service providers by web service nicknames step-by-step. This article also introduces some of the restrictions of the Web services wrapper.
Also available in: Chinese  
15 Aug 2013
Integrating MDM Server with Enterprise Information Systems using SAP as an example, Part 2: Enriching customer records with SAP specific information
This article is a sequel to the "Integrating MDM Server with Enterprise Information Systems using SAP as an example" tutorial. In Part 1, the focus was on propagating customer master data from an MDM Server to an SAP system. Part 2 focuses on how additional information added to the records in the SAP system can be sent to an MDM Server. In the demonstrated scenario, the customer record previously propagated to the SAP system is enriched with a SAP-generated tax identifier. The SAP Intermediate Document (IDoc) mechanism is used to send the modified customer record from the SAP system to the Enterprise Service Bus (ESB). On the ESB side, the WebSphere Adapter for SAP Applications is used to pick up the IDoc and feed its data into a mediation flow. Based upon the provided data, the mediation flow creates the MDM Server web service requests updating the customer record with the tax ID.
25 Jul 2013
Analyzing large datasets with Hive
Every 24 hours, the big data industry gathers and logs terabytes of data, and there is a growing need to make sense of this ballooning volume. From C-level executives to engineers, the challenge is to base forecasts and decisions on this information. This article shows you how to analyze big datasets using Apache Hive, a data warehouse built for data-intensive distributed applications.
Also available in: Russian  
23 Jul 2013
Develop an Android application with the InfoSphere Business Glossary REST API
IBM InfoSphere Business Glossary enables users to create, manage, and share an enterprise vocabulary and classification system. InfoSphere Business Glossary includes a REST API that makes glossary content easier to consume by enabling the development of custom applications based on particular needs. The API has been updated with every subsequent release of InfoSphere Business Glossary. This article provides step-by-step instructions on how to develop a dynamic Android application using the IBM InfoSphere Business Glossary REST API. The application enables users to find terms, examine the term's details and contact the steward using the native phone and email applications on the Android device. The goal is for InfoSphere Business Glossary customers to use the knowledge gained through building this sample application as inspiration for using the REST API to create their own custom applications.
Also available in: Chinese   Russian   Vietnamese  
18 Jul 2013
InfoSphere MDM for master data governance with MDM workflow
This article explains the important role that master data management (MDM) workflow plays when considering master data governance within an MDM implementation.
Also available in: Chinese  
18 Jul 2013
Leverage the benefits of enterprise Hadoop
MapReduce implementations are the technology of choice for enterprises that want to analyze big data at rest. Businesses have a choice between purely open source MapReduce implementations -- most notably, Apache Hadoop -- and commercial implementations. Here, the authors argue that enterprise requirements are better served by Hadoop-based products, such as InfoSphere BigInsights, than by "vanilla" Hadoop.
Also available in: Chinese   Russian  
16 Jul 2013
Big data security and auditing with IBM InfoSphere Guardium
In this article, you will learn how InfoSphere Guardium provides database activity monitoring and auditing capabilities that enable you to seamlessly integrate Hadoop data protection into your existing enterprise data security strategy. You will learn how to configure the system and to use InfoSphere Guardium security policies and reports tailored specifically for Hadoop environments, including IBM InfoSphere BigInsights, Cloudera, Hortonworks Data Platform, and Greenplum Hadoop. You will also learn about a quick start monitoring implementation available only with IBM InfoSphere BigInsights.
Also available in: Chinese   Japanese   Portuguese   Spanish  
11 Jul 2013
Advanced redaction for better document workflow
IBM InfoSphere Guardium Data Redaction removes sensitive data from documents that enterprises share across departments or with the public. The product supports various document formats. This article describes several features of InfoSphere Guardium Data Redaction along with configuration and programming tips, starting with XML document redaction.
Also available in: Chinese  
11 Jul 2013
Big data in the cloud
Big data is an inherent feature of the cloud and provides unprecedented opportunities to use both traditional, structured database information and business analytics with social networking, sensor network data, and far less structured multimedia. Big data applications require a data-centric compute architecture, and many solutions include cloud-based APIs to interface with advanced columnar searches, machine learning algorithms, and advanced analytics such as computer vision, video analytics, and visualization tools. This article examines the use of the R language and similar tools for big data analysis and methods to scale big data services in the cloud. It provides an in-depth look at digital photo management as a simple big data service that employs key elements of search, analytics, and machine learning applied to unstructured data.
Also available in: Chinese   Russian   Vietnamese   Portuguese   Spanish  
09 Jul 2013
Extending the Basic Initial Load in InfoSphere Master Data Management Advanced Edition
The Master Data Management (MDM) Advanced Edition (AE) Basic Initial Load (BIL) DataStage assets can be successfully extended. The BIL DataStage project presents a patterned and disciplined approach to the initial loading of data. This article highlights, in an easy-to-follow fashion, how to use the existing structures to add more tables to the types of data that BIL currently loads into the MDM database.
Also available in: Chinese  
03 Jul 2013
Implement ISO 20022 payment initiation messages in a payment processing solution
While working at a customer site, we discovered a requirement to use ISO 20022 standards for inbound and outbound messages in a payment processing solution where the data required by the system did not fully conform to the defined standard. This article provides best practices and use cases for implementing ISO standards in solutions, and outlines design considerations for the implementation. The article explains how to identify and apply different types of payment messages based on the payment type and the message definition, and how to use the ISO metadata to define the data attributes and data flow.
Also available in: Chinese  
03 Jul 2013
Optimize DB2 10.5 for Linux, UNIX, and Windows performance using InfoSphere Optim Query Workload Tuner with the DB2 BLU accelerator feature
The new BLU Acceleration feature in IBM DB2 10.5 for Linux, UNIX, and Windows can help improve the performance of your workload by converting row-organized tables to column-organized tables. However, the challenge is to know what tables could be converted and how much performance would improve. In this step-by-step article, you will learn how to use Optim Query Workload Tuner to perform column organization conversion, analyze what-ifs, and improve the performance of your workload.
27 Jun 2013
InfoSphere Guardium data security and protection for MongoDB, Part 3: Reporting, automating, and blocking
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium, including the configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process. Part 3 describes the capabilities in InfoSphere Guardium to search and report on audit data, how to create an audit process, and how to block users (available only with the Advanced Monitor). Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations.
Also available in: Chinese  
27 Jun 2013
Build a data warehouse with Hive
The data warehouse has been an ongoing battle among organizations for years. How do you build it? What data can you integrate? Should you use Kimball or Inmon, corporate information factory (CIF), or data marts? The list could go on for days -- decades, even. With big data, the questions become far more complicated, such as is a data warehouse enough? The answer lies in the enterprise. People claim that Hive is the data warehouse of Hadoop. Although true on one level, it's also something of a false claim. Sometimes, however, you have to use the tools available to you, and for that, Hive can be a data warehouse.
Also available in: Chinese   Russian   Vietnamese  
25 Jun 2013
What's the big deal about Big SQL?
If you specialize in relational database management technology, you've probably heard a lot about "big data" and the open source Apache Hadoop project. Perhaps you've also heard about IBM's new Big SQL technology, which enables InfoSphere BigInsights users to query Hadoop data using industry-standard SQL. Curious? This article introduces you to Big SQL, answering many of the common questions that relational DBMS users have about this IBM technology.
Also available in: Chinese   Russian  
14 Jun 2013
Get started with BigInsights
Learn how to manage your big data environment, import data for analysis, analyze data with BigSheets, develop your first big data application, develop Big SQL queries to analyze big data, and create an extractor to derive insights from text documents in this comprehensive set of learning guides.
14 Jun 2013
Synchronize data with control signals in the InfoSphere Streams Time Series Toolkit
InfoSphere Streams is the real-time component of the IBM big data platform. It provides the platform and toolkits for building real-time analytical solutions. The Time Series Toolkit included with Streams includes operators for preprocessing, analyzing, and modeling time series data in real time. Modeling operators in the toolkit use incoming time series data to build an internal model for forecasting or tracking. In real-world scenarios, the incoming data used for model building might become noisy and should then be discarded from the model-building process; once the incoming data is clean again, the model might have to be retrained. This article provides a solution for this and describes how to synchronize and calibrate the process of model building and operator functions with the quality of incoming data using the control port feature.
Also available in: Chinese  
11 Jun 2013
Best practices for IBM InfoSphere Blueprint Director, Part 3: Sharing Information Architectures through InfoSphere Blueprint Director
This article provides best practices on publishing information architecture blueprints using IBM InfoSphere Blueprint Director. Publishing architecture blueprints enables sharing of the most current solution architecture with all team members allowing everyone to experience the same project vision.
06 Jun 2013
Integrating PureData System for Analytics with InfoSphere Streams
This article describes how to perform a bulk load from InfoSphere Streams 2.0 to PureData System for Analytics N1001-010 using Netezza technology. The example InfoSphere Streams application demonstrates how Netezza enables a high-throughput connection and allows both systems, working together, to reach the high throughput that each can offer separately.
Also available in: Chinese  
04 Jun 2013
A complete connectivity guide from InfoSphere Information Server to DB2 for i
IBM Information Server supports extracting from and writing to DB2 for System i. To help you overcome any challenges in setting up the connection from Information Server to DB2 for i, this article provides clear, step-by-step instructions, from checking prerequisite information and components to connecting to DB2 for i and defining DataStage jobs.
Also available in: Vietnamese  
30 May 2013
Schema replication with IBM InfoSphere Data Replication, Part 2: Bi-directional schema subscriptions in DB2 for Linux, UNIX, and Windows 10.1
IBM InfoSphere Data Replication allows the synchronization of data between two or more database management systems either on the same or on different operational platforms. Many different usage scenarios exist. Starting in Version 10.1 of IBM InfoSphere Data Replication, replication is now supported at the schema level. This means that defined changes to the database structure such as the creation of new tables are automatically added to the replication system without any need for administration or intervention. This not only eliminates or greatly reduces the administrative efforts when tables are added or changed at the primary database but also significantly increases the reliability of the replication system, especially when used as a synchronization mechanism for an active disaster recovery site. This article is the second in a series that uses a disaster recovery use case to explain how to set up schema-level subscriptions for bi-directional replication topologies available in the Q Replication technology that comes with IBM InfoSphere Data Replication v10.1.3 for Linux, UNIX, and Windows. We encourage you to replay the scenario. To make this as convenient as possible, we provide various scripts in the Download section of the article. Watch out for more articles in this series that cover schema-level subscriptions for uni-directional topologies and more.
30 May 2013
Extract data from Excel sources in IBM InfoSphere Information Server using DataStage Java Integration Stage and Java Pack
Explore the functions of the Java Integration Stage (DataStage Connector) introduced in IBM InfoSphere Information Server version 9.1. This article addresses the Excel data source connectivity problem in older releases of IBM InfoSphere Information Server (7.5.x, 8.0.x, 8.1.x, 8.5.x, and 8.7.x) using Java Pack Plug-ins and the Java Pack API. The older releases of Information Server do not have a dedicated component for Excel connectivity. Third-party ODBC-to-ODBC bridges were used as an alternative, but they require a license. The Java Pack Stages coupled with the Java Pack API and the Apache POI API can be used to fetch Excel data into DataStage in a cost-effective way.
Also available in: Chinese  
23 May 2013
Build a data library with Hive
Storing massive amounts of data is great until you need to do something with it. No incredible discoveries or futuristic predictions come from unused data, no matter how much of it you store. Big data can be a complicated beast. Writing complex MapReduce programs in the Java programming language takes time, good resources, and know-how that most organizations don't have available. This is where building a data library using a tool like Hive on top of Hadoop becomes a powerful solution.
Also available in: Chinese   Russian  
21 May 2013
Performance characteristics of IBM InfoSphere Information Server 8.7 running on VMware vSphere 5.0 on Intel Xeon Processors
With the continuous growth of processing power and the increasing number of processor cores in servers, virtualization has become very attractive for many companies. As more and more IBM InfoSphere Information Server customers begin to adopt virtualization technology, understanding the performance characteristics of InfoSphere Information Server running in virtualized environments has become critical.
17 May 2013
Schema replication with IBM InfoSphere Data Replication, Part 1: Uni-directional schema subscriptions in DB2 for Linux, UNIX, and Windows 10.1
IBM InfoSphere Data Replication allows the synchronization of data between two or more database management systems either on the same or on different operational platforms. Many different usage scenarios exist. Starting in Version 10.1 of IBM InfoSphere Data Replication, replication is now supported at the schema level. This means that defined changes to the database structure such as the creation of new tables are automatically added to the replication system without any need for administration or intervention. This not only eliminates or greatly reduces the administrative efforts when tables are added or changed at the primary database but also significantly increases the reliability of the replication system, especially when used as a synchronization mechanism for an active disaster recovery site. This article is the first in a series that uses a disaster recovery use case to explain how to set up schema-level subscriptions for uni-directional replication topologies available in the Q Replication technology that comes with IBM InfoSphere Data Replication v10.1.3 for Linux, UNIX, and Windows. We encourage you to replay the scenario. To make this as convenient as possible, various scripts are provided in the Download section of this article. Watch out for more articles in this series that cover schema-level subscriptions for bi-directional topologies and more.
Also available in: Chinese  
16 May 2013
Analyze text from social media sites with InfoSphere BigInsights
Learn how you can start creating, testing, deploying, and using custom text extractors to analyze social media data and other forms of text data using technologies available in IBM's big data platform.
Also available in: Russian   Portuguese   Spanish  
14 May 2013
Integrating InfoSphere Streams 3.0 applications with InfoSphere Information Server 9.1 DataStage jobs, Part 1: Connecting the Streams job and DataStage job
This article describes the integration architecture between InfoSphere Streams and InfoSphere DataStage, and it provides step-by-step instructions for connecting InfoSphere Streams 3.0 applications with InfoSphere DataStage 9.1 jobs using the InfoSphere Streams DataStage Integration toolkit for Streams and the InfoSphere DataStage Streams connector.
Also available in: Chinese  
09 May 2013
Modeling created global temporary tables for DB2 with InfoSphere Data Architect 8.5, Part 2: Modifying an existing data model
Business applications commonly need to reuse aggregated or processed data from a set of data sources over a single series of operations. An example for this could be generation of reports for analysis and decision making. In order to support this need, some database vendors have introduced support for temporary tables that can hold such aggregated or processed data. Beginning with IBM InfoSphere Data Architect version 8.5, you can model created global temporary tables (CGTTs) for DB2 for z/OS version 10 and DB2 for Linux, UNIX, and Windows version 9.7. This two-part series will demonstrate how you can accelerate data modeling with CGTTs with DB2 for z/OS version 10 and DB2 for Linux, UNIX, and Windows version 9.7 by adopting InfoSphere Data Architect 8.5.
02 May 2013
Modeling created global temporary tables for DB2 with InfoSphere Data Architect 8.5, Part 1: Getting started
Business applications commonly need to reuse aggregated or processed data from a set of data sources over a single series of operations. An example for this could be generation of reports for analysis and decision making. In order to support this need, some database vendors have introduced support for temporary tables that can hold such aggregated or processed data. Beginning with IBM InfoSphere Data Architect version 8.5, you can model created global temporary tables (CGTTs) for DB2 for z/OS version 10 and DB2 for Linux, UNIX, and Windows version 9.7. This two-part series will demonstrate how you can accelerate data modeling with CGTTs with DB2 for z/OS version 10 and DB2 for Linux, UNIX, and Windows version 9.7 by adopting InfoSphere Data Architect 8.5.
Also available in: Chinese  
25 Apr 2013
InfoSphere MDM Collaboration Server V10.0 design strategy and implementation, Part 2: A guide to designing and implementing solutions using IBM InfoSphere MDM Collaboration Server v10.0
In Part 1 of this series, a sample business case scenario illustrated the best approach for designing and creating technical specifications for an application using IBM InfoSphere Master Data Management Collaboration Server v10.0. Part 2 examines the implementation strategy and shows step-by-step how to build a robust application using InfoSphere Master Data Management. Read this article to gain an understanding of the basic considerations for implementing an application using MDM Collaboration Server.
24 Apr 2013
Developing a big data application for data exploration and discovery
Exploring big data and traditional enterprise data is a common requirement of many organizations. In this article, we outline an approach and guidelines for indexing big data managed by a Hadoop-based platform for use with a data discovery solution. Specifically, we describe how data stored in IBM's InfoSphere BigInsights (a Hadoop-based platform) can be pushed to InfoSphere Data Explorer, a sophisticated tool that enables business users to explore and combine data from multiple enterprise and external data sources.
Also available in: Chinese   Portuguese   Spanish  
23 Apr 2013
Accelerate the path to PCI DSS data compliance using InfoSphere Guardium
This article gives you a step-by-step overview of using the Payment Card Industry (PCI) Data Security Standard (DSS) accelerator that is included with the standard InfoSphere Guardium data security and protection solution. The PCI DSS is a set of technical and operational requirements designed to protect cardholder data, and it applies to all organizations that store, process, use, or transmit cardholder data. Failure to comply can mean loss of privileges, stiff fines, and, in the case of a data breach, severe loss of consumer confidence in your brand or services. The InfoSphere Guardium accelerator helps guide you through the process of complying with parts of the standard using predefined policies, reports, group definitions, and more.
18 Apr 2013
Calling Python code from IBM InfoSphere Streams
The Python programming language is a popular choice among enterprise developers to quickly put together working solutions. Many companies adopt Python to build IT assets for regular use. IBM InfoSphere Streams is a novel middleware product designed for implementing logic directly in C++ and Java. It is also possible to call Python code within the context of a Streams application. Learn how to call Python code directly from IBM InfoSphere Streams applications.
Also available in: Chinese   Russian  
15 Apr 2013
DataStage command line integration
Develop IBM InfoSphere DataStage jobs that can be called from a command line or shell script using UNIX pipes for more compact and efficient integration. This technique can save storage space by avoiding the need to land intermediate files that feed an ETL job. It also reduces overall execution time and allows sharing the power of DataStage jobs through remote execution.
Also available in: Chinese  
04 Apr 2013
Tuning the Oracle Connector performance in IBM InfoSphere DataStage, Part 2: Optimization of bulk load operation and considerations for reject links and data types
The Oracle Connector is a connectivity component in IBM InfoSphere Information Server. It is utilized by IBM InfoSphere DataStage and other products in the Information Server suite to perform extract, lookup, load, and metadata import operations on Oracle databases. This article is Part 2 of a series of two articles that provide a set of guidelines for tuning the Oracle Connector stages in DataStage parallel jobs with the goal of maximizing their performance.
Also available in: Chinese  
04 Apr 2013
Tuning the Oracle Connector performance in IBM InfoSphere DataStage, Part 1: Overview of the connector tuning process and optimization of fetch, lookup and DML operations
The Oracle Connector is a connectivity component in IBM InfoSphere Information Server. It is utilized by IBM InfoSphere DataStage and other products in the Information Server suite to perform extract, lookup, load, and metadata import operations on Oracle databases. This article is Part 1 of a series of two articles that provide a set of guidelines for tuning the Oracle Connector stages in DataStage parallel jobs with the goal of maximizing their performance.
Also available in: Chinese  
28 Mar 2013
IBM InfoSphere DataStage job validation steps using IBM Optim Test Data Management Solution
Large numbers of extract, transform, and load (ETL) jobs need to be validated during two common job lifecycle scenarios: migrating ETL projects and jobs from an older version to a new version, or moving jobs from development to QA to production. Enterprises typically validate that their jobs running in a new version of the software or a new hardware environment are producing the same results as before, giving them confidence that the new system can replace the old system. Similarly, before a job in the data integration process is deployed in the production environment, it must be shown to have the expected behavior in the development, testing, and production environments. In this article, a step-by-step example shows how DataStage users can use IBM InfoSphere Optim Test Data Management Solution to validate the results of ETL jobs.
Also available in: Chinese  
28 Mar 2013
Supply cloud-level data scalability with NoSQL databases
This article explores NoSQL databases, including an overview of the capabilities and features of the NoSQL systems HBase, MongoDB, and SimpleDB. It also covers cloud and NoSQL database design basics.
Also available in: Chinese   Russian  
25 Mar 2013
Enhancing IBM Enterprise Market Management solutions with master data
Having good data backing Enterprise Marketing Management (EMM) solutions reduces the number of duplicate, missing, and incorrect offers that you send to your customers and prospects. In this article, learn the design considerations and patterns for supplying master data to EMM solutions. The article also describes good practices when using IBM InfoSphere Master Data Management in your solution. After you have a single view of the master data for EMM, you can leverage this source-trusted data across your enterprise. Master data then becomes the foundation of your Smarter Analytics and Smarter Commerce solutions.
Also available in: Chinese  
14 Mar 2013
Designing an alerts mechanism on the IBM InfoSphere DataStage and QualityStage Operations Console Database, Part 1: Alerts for common scenarios
The IBM InfoSphere DataStage and QualityStage Operations Console is a web-based application that is used to monitor the DataStage engine components in real time. It relies on a back-end database, the DataStage Operations Database, where all the information about job runs and system resources is stored. This article describes how to construct email alerts on the operations database using database triggers. You will also learn how to use the data available in the operations database tables and how these tables are updated.
28 Feb 2013
Using InfoSphere Streams with Informix
Learn how to connect to and use Informix as a data source or a data target with InfoSphere Streams. This article covers the use of both the Informix-specific protocol and the more general IBM Common Driver protocol that is used by several IBM database products. After reading this article, you will be able to use Informix in a Streams environment.
Also available in: Chinese  
28 Feb 2013
IBM Accelerator for Machine Data Analytics, Part 4: Speeding the up-and-running experience for a variety of data
Machine logs from diverse sources are generated in an enterprise in voluminous quantities. IBM Accelerator for Machine Data Analytics simplifies the implementation work required to accelerate analysis of semi-structured, unstructured, or structured textual data. In this article, the fourth in a series, learn step by step how to use the web or Eclipse tooling in IBM InfoSphere BigInsights to get up and running more quickly with IBM Accelerator for Machine Data Analytics.
Also available in: Chinese  
28 Feb 2013
Implementing Windows desktop single sign-on for InfoSphere Business Glossary
IBM InfoSphere Business Glossary 9.1 uses the Simple and Protected GSS-API Negotiation Mechanism (SPNEGO) support provided by WebSphere Application Server to enable configuration of a seamless single sign-on environment for InfoSphere Business Glossary and InfoSphere Business Glossary Anywhere users. Deploying this feature requires correct configuration of several interlocking components, including some synchronization details that might not be immediately apparent. This article provides a step-by-step walkthrough of the configuration for a scenario that includes using a Microsoft Active Directory domain controller for the external user registry, Windows Server 2008 on the WebSphere Application Server tier and for the domain controller, and Windows 7 on the client tier. The article also includes troubleshooting tips and references for some common pitfalls.
Also available in: Chinese  
21 Feb 2013
Use data-level security for granular access control of auditing results in InfoSphere Guardium
InfoSphere Guardium offers enterprise-wide data activity monitoring for data protection and auditing. Two critical elements for a successful enterprise implementation of InfoSphere Guardium are support for separation of duties, and enterprise deployment capabilities that eliminate redundant configurations and streamline deployments to match your organizational structures. By using Guardium data-level security mechanisms, administrators can assign responsibilities for particular databases or systems to individuals or groups in a way that aligns with their hierarchical organizational structure. This article describes the benefits of data-level security and provides step-by-step instructions for implementing the solution for a sample scenario.
21 Feb 2013
Customize the InfoSphere Master Data Management classic party search capability
In this tutorial, use the data extension and pre-written SQL capabilities to create a custom search object and enable the use of the new party attributes as part of a classic party search service. You will use a scenario in which the search criteria to be added, as part of the pluggable SQL, are not available in the out-of-the-box party search object. Therefore, additional Java development effort is required.
24 Jan 2013
Install IBM InfoSphere Guardium Data Encryption on the IBM PureApplication System
This article focuses on deploying the IBM InfoSphere Guardium data encryption software to provide data encryption for Red Hat Linux V6.2 IBM DB2 hosts. This software provides encryption for both regular files and DB2 files. InfoSphere Guardium also provides a DB2 agent for DB2 encrypted backup and restore operations.
Also available in: Chinese  
16 Jan 2013
Mask archived and extracted data directly into CSV, XML and ECM formats using InfoSphere Optim Data Masking Solution
Learn how file format types that are now supported by IBM InfoSphere Optim Data Masking Solution will help you to increase the interoperability of your data and support data privacy and security requirements. This article describes options to convert your data to XML, CSV, or ECM format types, and explains how various threshold limits can be applied. In addition, you'll learn how to apply data privacy modules for these file format types and how to manage LOB and XML data types during this process.
Also available in: Chinese  
10 Jan 2013
IBM Accelerator for Machine Data Analytics, Part 1: Speeding up machine data analysis
Machine logs from diverse sources are generated in an enterprise in voluminous quantities. IBM Accelerator for Machine Data Analytics simplifies the implementation work required to accelerate analysis of semi-structured, unstructured, or structured textual data.
Also available in: Chinese   Japanese  
03 Jan 2013
Cognos Business Intelligence 10.2 reporting on InfoSphere BigInsights
Find guidance on consuming IBM InfoSphere BigInsights data through IBM Cognos Business Intelligence 10.2.
Also available in: Chinese  
01 Jan 2013
Using InfoSphere MDM Collaboration Server Java APIs
IBM InfoSphere MDM Collaboration Server exposes several interfaces through which the system can be customized, with InfoSphere MDM Collaboration Server scripting and the InfoSphere MDM Collaboration Server Java APIs being the major interfaces. This article provides the details of developing applications using the InfoSphere MDM Collaboration Server Java APIs in the Eclipse Integrated Development Environment (IDE). The steps required to set up the environment, create the Java API-based project, run the classes, and debug using the Eclipse IDE are highlighted in this article. Sample code is provided to illustrate the steps used in the development.
13 Dec 2012
Real-time transliteration using InfoSphere Streams custom Java operator and ICU4J
With the ever-growing importance of Internet monitoring and sentiment analysis, there is an immediate need for identifying patterns (performing text analytics) in big data. However, one challenge in this exercise is that countries can have multiple languages, which makes it difficult to run text analytics effectively, since rules are not available for all languages. For example, in India, the official language of each state is different, and data is available in both English and local languages. This article describes how to bring about consistency during the transliteration process, and how to use IBM InfoSphere Streams to prepare linguistic data and apply text analytics or pattern recognition logic.
Also available in: Chinese  
13 Dec 2012
Data mining techniques
Examine different data mining and analytics techniques and solutions. Learn how to build them using existing software and installations.
Also available in: Russian   Vietnamese  
11 Dec 2012
Big Data, Fractal Geometry, and Pervasively Parallel Processing
Big data platforms must push the nanoscopic frontier of scalability. It’s useful to think of a consolidated big data platform, architecturally, as a fractal structure. What that means is that big data must be self-similar on all platform scales — from macro to nano — and leverage the full parallel processing resources that are available at each level.
04 Dec 2012
Policy monitoring reports security setup with InfoSphere Master Data Management and Tivoli Directory Server
The Policy Monitoring component was introduced in IBM's InfoSphere Master Data Management (MDM) v10.1 release. Using IBM Cognos Business Intelligence reporting tools, Policy Monitoring enables organizations to report on data quality by using aggregated metrics and to establish policies for compliance with data quality thresholds. This tutorial provides detailed steps to set up a basic security model in IBM Cognos Business Intelligence for providing authentication and authorization for Policy Monitoring reports.
Also available in: Portuguese  
29 Nov 2012
Big data business intelligence analytics
Learn about integrating business intelligence and big data analytics. Explore the similarities, differences, and what choices to consider.
Also available in: Russian   Vietnamese   Spanish  
20 Nov 2012
Compare IBM data masking solutions: InfoSphere Optim and DataStage
Many organizations use production data to populate their test environments. The problem is that if there is sensitive data in your production environment, you are exposing that data to software developers and testers. IBM offers two solutions to this problem: the InfoSphere Optim Data Masking option for Test Data Management, and the InfoSphere DataStage Pack for Data Masking. Both mask data and depersonalize it while still maintaining its realism. This article explores the common functions that both solutions share, which are requirements for effective data masking, and then explores the differences between the products. After reading this article, you should be able to pick the IBM data masking solution that best meets your requirements.
Also available in: Portuguese   Spanish  
15 Nov 2012
Create customer segmentation models in SPSS Statistics from spreadsheets
Learn how to bring a spreadsheet of raw data into SPSS Statistics and apply two classification algorithms to create customer segmentation models. Then, use options in SPSS Statistics to create persistent files that contain the rules for the models that can be used for both deployment of customer classifications back to spreadsheets and into a big data environment.
Also available in: Russian   Spanish  
13 Nov 2012
Use IBM InfoSphere Information Server to transform legacy data into information services
Learn how to create and deploy information services to access legacy databases without writing any code. The generated Web services are created using the IBM Information Server components including InfoSphere DataStage, InfoSphere Federation Server, InfoSphere Information Services Director, and WebSphere Transformation Extender for DataStage. In this example, the information services are delivered using a standard government XML model (GJXDM).
12 Nov 2012
Use IBM InfoSphere Optim Query Workload Tuner 3.1.1 to tune statements in DB2 for Linux, UNIX, and Windows, and DB2 for z/OS that reference session tables
IBM InfoSphere Optim Query Workload Tuner (OQWT) 3.1.1 can tune statements for IBM DB2 for Linux, UNIX, and Windows, and IBM DB2 for z/OS. This document describes how to use OQWT to tune a statement that accesses one or more session tables. Two methods are presented for setting up the database environment for the session table so that OQWT 3.1.1 can tune statements that use the table. Examples are provided for a script that is required to set up the environment, including example snapshots of the output and functionality of the applicable OQWT tuning features.
08 Nov 2012
Use InfoSphere Guardium Universal Feed to create a customized data activity monitoring solution, Part 2: Creating a feed for a user-defined data source
New databases and new applications are continually being created and adopted to meet specific organizational needs. Data protection and auditing capabilities are required by mandate and are more critical than ever. The InfoSphere Guardium data protection solution is extensible to enable the integration of a wide variety of new databases and sources into its platform, thereby providing a consistent enterprise-wide monitoring solution. In this article series, you learn how to integrate event logs from any software into InfoSphere Guardium using the Guardium Universal Feed. Part 1 described how to create a feed for a relational data source. In this article, you will learn how to create a feed for any arbitrary data source, how to upload event descriptions into the Guardium repository, and how to use the reporting capability for event data. This article includes a sample event description file and a sample program.
Also available in: Portuguese  
08 Nov 2012
Best practices for IBM InfoSphere Blueprint Director, Part 2: Designing information blueprints from the ground up
This article provides best practices using InfoSphere Blueprint Director. Understanding and applying these best practices enables you to create brand new architecture blueprints that are effective and actionable. It is intended for experienced users of InfoSphere Blueprint Director.
Also available in: Portuguese  
01 Nov 2012
InfoSphere Classic products: Troubleshooting and repairing badly formed data
This article provides practical concepts, tips, and examples to help you troubleshoot and fix problems with badly formed data on the mainframe, enabling your Classic data server to convert legacy data seamlessly to the relational SQL formats that your business requires.
29 Oct 2012
An efficient change data capture using IBM InfoSphere CDC Transaction Stage
This article describes a solution based on the integration of IBM InfoSphere DataStage with IBM InfoSphere Change Data Capture that ensures you capture changed operational data and transmit it in real time to the data warehouse, keeping it continuously up to date. This enables you to retrieve current information for intelligent decision making, providing substantial business value, guaranteed data delivery, cost efficiency, and improved speed.
Also available in: Chinese  
25 Oct 2012
