Technical library

1 - 100 of 2550 results
Feed InfoSphere Streams applications with live data from databases
This tutorial shows you how to connect IBM InfoSphere Data Replication Change Data Capture (CDC) as a near real-time data source for InfoSphere Streams. Walk through several integration options for a ready-to-use and custom user exit, then explore the pros and cons of each. Downloadable example sources for the user exit are provided and can be customized to fit your requirements.
04 Sep 2014
IBM Cognos Proven Practices: IBM Cognos TM1 FEEDERS
One of the more advanced concepts in the development of IBM Cognos TM1 cubes is the proper implementation of FEEDERS within TM1 rules. This document describes FEEDERS and how to use them effectively for improved performance when building IBM Cognos TM1 cubes.
Also available in: Russian   Spanish  
02 Sep 2014
Reduce the source footprint of CDC by querying databases for archive logs
Learn how the Change Data Capture (CDC) Oracle Redo physical standby configuration in InfoSphere Data Replication allows CDC to read logs off the standby device regardless of the log shipment method. This configuration reduces the source footprint by querying the standby database for archive log information.
21 Aug 2014
Build a data mining app using Java, Weka, and the Analytics Warehouse service
The Analytics Warehouse (formerly BLU Acceleration) service provides data warehousing and analytics as a service on IBM Bluemix. Developers can develop and deploy a heavy-duty analytic application using blazing-fast IBM BLU database technology offered in the cloud. Learn how to develop a data mining application using the Weka statistical analysis tool and leveraging the IBM BLU columnar database.
Also available in: Chinese   Japanese  
20 Aug 2014
Build a cache application in the cloud with Bluemix cache services
This tutorial walks through building caching applications on top of the IBM Cloud environment. Recorded execution times demonstrate the speed of accessing the caching service.
19 Aug 2014
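The speedup such tutorials measure comes from the general caching pattern: pay the backend cost once, then serve repeated requests from memory. A minimal in-process sketch (plain Python `functools`, not the Bluemix cache service API, which is accessed over the network):

```python
import time
from functools import lru_cache

def slow_lookup(key):
    """Simulate an expensive backend fetch (database or HTTP round trip)."""
    time.sleep(0.01)
    return key.upper()

@lru_cache(maxsize=128)
def cached_lookup(key):
    return slow_lookup(key)

start = time.perf_counter()
first = cached_lookup("item-1")   # cache miss: pays the full backend cost
miss_time = time.perf_counter() - start

start = time.perf_counter()
second = cached_lookup("item-1")  # cache hit: served from memory
hit_time = time.perf_counter() - start

assert first == second == "ITEM-1"
assert hit_time < miss_time       # hits are far cheaper than misses
```

A remote caching service trades the in-process hit speed for sharing across application instances; the miss/hit asymmetry is the same.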
Generate reports remotely and offline with InfoSphere Optim Performance Manager command-line utility
You can use predefined report features in the InfoSphere Optim Performance Manager for DB2 for Linux, UNIX and Windows web console to generate reports in HTML, PDF, PPT, and XLS formats. In this tutorial, learn how to extend these features with the command-line utility (CLU). With the CLU, you can generate reports in offline and batch mode without logging on to the web console, schedule reports to run at periodic intervals, and send reports as attachments to team members.
14 Aug 2014
Jumping through Hadoop: Stream Big Data video on a mobile app by integrating IBM Worklight with IBM InfoSphere BigInsights on IBM Bluemix
This article explains how you can integrate video data streaming with a mobile application. In doing so, it also highlights some of the business problems addressed and opportunities available by using several cutting-edge technologies: IBM InfoSphere BigInsights on IBM Bluemix and IBM Worklight. This is an example of a compelling trend that demands cloud service integration of big data to mobile devices for video streaming.
13 Aug 2014
Remove and reintegrate an auxiliary standby in an HADR setup
Starting with IBM DB2 10.1, the High Availability Disaster Recovery (HADR) feature supports multiple standbys. With multiple standbys, you can have your data in more than two sites for improved data protection with a single technology. This article provides detailed steps for removing and reintegrating the auxiliary standby in a multiple standby HADR setup.
13 Aug 2014
A brief comparative perspective on SQL access for Hadoop
Although Hadoop is often thought of as the one-size-fits-all solution for big data processing problems, the project is limited in its ability to manage large-scale graph processing, stream processing, and scalable processing of structured data. Learn about Big SQL, a massively parallel processing SQL engine that is optimized for processing large-scale structured data. See how it compares to other systems that were recently introduced to improve the efficiency of the Hadoop framework for processing large-scale structured data.
12 Aug 2014
Develop an iOS application with the InfoSphere Business Glossary REST API
InfoSphere Business Glossary enables you to create, manage, and share an enterprise vocabulary and classification system. InfoSphere Business Glossary includes a REST API that makes glossary content easier to consume by enabling the development of custom applications. In this article, step-by-step instructions show how to develop a dynamic iOS application using the InfoSphere Business Glossary REST API. The application lets users find business glossary assets, examine the asset's details, and contact the steward using the native phone and email applications on the iOS device. After building the sample application, you'll be on your way to using the REST API to create your own custom applications with InfoSphere Business Glossary.
07 Aug 2014
Gain confidence about data security in the cloud
This tutorial demystifies cloud security and arms you with the know-how to adopt the cloud with confidence. Learn how cloud security is a shared responsibility between the cloud service provider and the client. The responsibilities of each party are explored. Also walk through a data security use case involving IBM BLU Acceleration for the Cloud. When certain criteria are met, clients can achieve data security equal to or better than what they can achieve onsite.
Also available in: Chinese   Spanish  
07 Aug 2014
Develop a custom data migration tool for Reference Data Management
Migrating data from one IBM InfoSphere Master Data Management (MDM) Reference Data Management installation to another can be a challenging task. Using an IBM internal tool as an example, this article explores how to develop a custom solution for complex migration problems using Reference Data Management REST or MDM SOAP services. The solution is for all operating systems, application servers, and databases. The custom tool can migrate data from a lower version of Reference Data Management to a higher version, specifically from V10 to V11. Your custom tool can be designed to work well for future versions of Reference Data Management (assuming the service interfaces remain the same).
Also available in: Chinese   Spanish  
31 Jul 2014
Increase scalability and failure resilience of applications with IBM Data Server Driver for JDBC and SQLJ
IBM DB2 pureScale on DB2 for Linux, UNIX, and Windows and Sysplex on DB2 for z/OS are IBM's premium database cluster options that offer unmatched database availability and scalability. This article explores the fundamentals and algorithms that govern the pureScale and Sysplex functions of the IBM Data Server Driver for JDBC and SQLJ. Applications can easily exploit these data-sharing technologies with minimal application or driver-related configuration effort. With increasing adoption of data-sharing technologies and the widespread use of JDBC by client applications, it is beneficial for application developers, users, and system administrators to understand, exploit, and manage the behavior of JDBC applications connecting to pureScale or Sysplex systems. This article discusses these technologies from a client standpoint.
31 Jul 2014
Processing and content analysis of various document types using MapReduce and InfoSphere BigInsights
Businesses often need to analyze large numbers of documents of various file types. Apache Tika is a free open source library that extracts text contents from a variety of document formats, such as Microsoft Word, RTF, and PDF. Learn how to run Tika in a MapReduce job within InfoSphere BigInsights to analyze a large set of binary documents in parallel. Explore how to optimize MapReduce for the analysis of a large number of smaller files. Learn to create a Jaql module that makes MapReduce technology available to non-Java programmers to run scalable MapReduce jobs to process, analyze, and convert data within Hadoop.
29 Jul 2014
Use the DB2 with BLU Acceleration Pattern to easily deploy a database
The database as a service (DBaaS) 1.1.0.8 component of the IBM PureApplication System introduced many new features. This article describes some of them, including deployment of DB2 with BLU, increasing the database resources, and backup. You also will learn how the DB2 with BLU Acceleration Pattern can make it easier and faster to create and deploy BLU-enabled datasets in DBaaS 1.1.0.8.
24 Jul 2014
DB2 monitoring enhancements for BLU Acceleration
BLU Acceleration is a collection of technologies for analytic queries that was introduced in IBM DB2 for Linux, UNIX and Windows (LUW) Version 10.5. BLU Acceleration can provide significant benefits in many areas including performance, storage savings, and overall time to value. This article provides an overview of the monitoring capabilities that support BLU Acceleration. These capabilities provide insight into the behavior of the database server and assist with tuning and problem determination activities. Extensive example queries help you start monitoring workloads that take advantage of BLU Acceleration.
24 Jul 2014
IBM InfoSphere Optim Data Growth: Setting up your first Archive
This article introduces the IBM InfoSphere Optim product. It explains the product's fundamental elements and shows, step by step, how to set up your first archive request, helping a technical audience get comfortable with the product quickly.
21 Jul 2014
Deploy and explore the DB2 10.5 pureScale Feature with WebSphere Commerce V7
The IBM DB2 pureScale Feature for Advanced Enterprise Server Edition is designed for continuous availability and tolerance of both planned maintenance and unplanned accidental component failure. This article describes how to deploy the DB2 10.5 pureScale Feature with IBM WebSphere Commerce V7 for both new and existing WebSphere Commerce applications, including the instance setup and application configuration from the Admin Console of WebSphere Application Server.
17 Jul 2014
DB2 monitoring: Migrate from snapshot monitor interfaces to in-memory metrics monitor interfaces
This article helps you migrate from the snapshot monitor interfaces to the in-memory metrics monitor interfaces that were first introduced in DB2 for Linux, UNIX, and Windows Version 9.7.
17 Jul 2014
Build a simple web app for student math drills using the Bluemix SQLDB service
Learn how to create a Node.js application that relies on a managed database service, SQLDB, to handle the demanding web and transactional workloads for your application.
Also available in: Chinese   Japanese  
15 Jul 2014
Deploy InfoSphere MDM Collaborative Edition onto a cluster, Part 1: Strategies for mixed clustered topologies on an application server
This two-part tutorial shows how to set up a typical IBM InfoSphere Master Data Management (MDM) Collaborative Edition V11 clustered environment. This first part describes how to deploy InfoSphere MDM Collaborative Edition V11 onto a cluster. Follow along step-by-step with detailed examples of several clustering strategies. Also learn about the required configuration to enable the IBM HTTP Server for caching and load balancing the cluster nodes.
10 Jul 2014
Integrate data with SoftLayer Object Storage by using IBM InfoSphere DataStage
This article illustrates how to use IBM InfoSphere DataStage to integrate with SoftLayer Object Storage.
10 Jul 2014
Integrate data with Cloudant and CouchDB NoSQL database using IBM InfoSphere Information Server
In this article, learn how to use the Cloudant database and CouchDB with the IBM InfoSphere DataStage V11.3 Hierarchical stage (previously called XMLConnector). Detailed examples show how to invoke the Cloudant API, using the REST service of the Hierarchical Data stage, to access and change data on the Cloudant DB with support of HTTP basic authentication. You can modify the example to perform the same integration operations on a CouchDB database. The examples also use other steps of the Hierarchical stage to retrieve documents and parse or compose the retrieved documents into a relational structure.
10 Jul 2014
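The REST-with-basic-authentication pattern that this entry describes can be sketched generically. The account, database, document ID, and credentials below are placeholders, and the request is constructed but deliberately never sent, so the sketch runs offline:

```python
import base64
import urllib.request

account = "example-account"        # hypothetical Cloudant account name
db, doc_id = "mydb", "doc1"        # hypothetical database and document
url = f"https://{account}.cloudant.com/{db}/{doc_id}"

# HTTP basic authentication: base64-encode "user:password" and send it
# in the Authorization header.
credentials = base64.b64encode(b"user:secret").decode("ascii")
req = urllib.request.Request(url, method="GET")
req.add_header("Authorization", f"Basic {credentials}")
req.add_header("Accept", "application/json")

# urllib.request.urlopen(req) would perform the call and return the
# JSON document body; omitted here to keep the sketch network-free.
assert req.get_method() == "GET"
assert req.get_header("Authorization").startswith("Basic ")
```

The Hierarchical Data stage's REST step composes essentially the same request declaratively, with the authentication details supplied as stage properties.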
Create a business intelligence and analytics service in Ruby with Analytics Warehouse on IBM Bluemix
The Analytics Warehouse Service available in IBM Bluemix provides a powerful, easy-to-use, and agile platform for business intelligence and analytics. It is an enterprise-class managed service that is powered by the in-memory optimized, column-organized BLU Acceleration data warehouse technology. This article demonstrates how easy it is to incorporate the Analytics Warehouse service into your application so that you can focus on your application.
Also available in: Chinese   Japanese   Spanish  
10 Jul 2014
Integrate the Information Governance Catalog and IBM InfoSphere DataStage using REST
IBM InfoSphere DataStage is a data integration tool that lets you move and transform data between operational, transactional, and analytical target systems. In this article, learn to use InfoSphere DataStage to integrate with the REST resources of the Information Governance Catalog Glossary. A sample use case shows how to design a DataStage job to take advantage of the Hierarchical Data stage to access and author the Information Governance Catalog Glossary content. The REST step is a new capability of the Hierarchical Data stage (previously called XML Connector in InfoSphere Information Server) to invoke the REST Web services with support for different authentication mechanisms and Secure Socket Layer (SSL). Learn to parse the response body of the REST call and apply transformations on the data.
03 Jul 2014
Explore the eXtreme Scale-based caching service options in IBM PureApplication System
Caching services are a popular solution to address performance and scalability issues for cloud enterprise applications. Explore three caching options available with the IBM PureApplication System cloud system: One built-in, one based on WebSphere eXtreme Scale that uses a virtual system pattern on a cluster, and one based on eXtreme Scale that uses a VSP with a core OS image.
28 Jun 2014
Using the MDM Application Toolkit to build MDM-centric business processes, Part 4: Work with MDM integration services
This is the fourth article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the IBM InfoSphere Master Data Management (MDM) application toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article explores the BPM integration services provided with the Application Toolkit. Learn how these services can help you easily create workflows that integrate with InfoSphere MDM.
26 Jun 2014
Prepare the server environment to integrate IBM DataStage and InfoSphere Data Replication CDC
IBM InfoSphere Data Replication (IIDR) Change Data Capture (CDC) offers direct integration with InfoSphere DataStage by using a transaction stage job. In this article, follow the step-by-step process that outlines all the required tasks to prepare the DataStage environment to enable IIDR CDC integration.
19 Jun 2014
Simplifying database administration by using Administration Console
This video introduces the main features of Administration Console and shows you how to use it to identify and investigate database problems before they impact your business.
19 Jun 2014
Increase DB2 availability
This article demonstrates how to use a cloud provider to create a reliable tie-breaking method to avoid a split-brain scenario. The procedure is for two-node clusters running IBM DB2 for Linux, UNIX and Windows (LUW) with the integrated high availability (HA) infrastructure. See how to automate two-node failover for DB2 LUW 10.1 or higher.
19 Jun 2014
Integrate DB2 for z/OS with InfoSphere BigInsights, Part 1: Set up the InfoSphere BigInsights connector for DB2 for z/OS
Learn how to set up integration between IBM DB2 11 for z/OS and IBM InfoSphere BigInsights. Enable access to structured and non-structured data that is stored in the Hadoop Distributed File System and send the results back to DB2, where the data can be integrated with online transactional data. Using a scenario that is common to all DB2 for z/OS users, learn to create a big data solution that uses the user-defined functions JAQL_SUBMIT and HDFS_READ to run jobs on InfoSphere BigInsights and retrieve the results with SQL.
17 Jun 2014
Calculate storage capacity of indexes in DB2 for z/OS
With the drastic growth of data stored in IBM DB2 tables, index sizes are bound to increase. Most indexes are still uncompressed, so there's an urgent need to monitor indexes to avoid unforeseen outages related to index capacity. This article describes a process to calculate the capacity limit for various types of indexes. Once you know how to calculate the capacity of indexes in different situations, you can monitor their growth to avoid outages.
12 Jun 2014
Convert row-organized tables to column-organized tables in DB2 10.5 with BLU Acceleration
With IBM DB2 10.5 you can easily convert row-organized tables to column-organized tables by using command line utilities or IBM Optim Data Studio 4.1. This article introduces two approaches for table conversions: the db2convert command and the ADMIN_MOVE_TABLE stored procedure. We also describe a manual conversion. Learn the advantages and disadvantages of the different conversion approaches. Best practices are also discussed.
12 Jun 2014
DB2 10.1 fundamentals certification exam 610 prep, Part 2: DB2 security
This tutorial introduces authentication, authorization, privileges, and roles as they relate to IBM DB2 10.1. It also introduces granular access control and trusted contexts. This is the second in a series of six tutorials designed to help you prepare for the DB2 10.1 Fundamentals certification exam (610). It is assumed that you already have basic knowledge of database concepts and operating system security.
05 Jun 2014
Understand the "Heartbleed" bug
Learn the technical details of the "Heartbleed" bug.
28 May 2014
Parallel processing of unstructured data, Part 3: Extend the sashyReader
This series explores how to process unstructured data in parallel fashion — within a machine and across a series of machines — using the power of IBM DB2 for Linux, UNIX and Windows (LUW) and GPFS shared-nothing cluster (SNC) to provide efficient, scalable access to unstructured data through a standard SQL interface. In this article, see how the Java-based sashyReader framework leverages the architectural features in DB2 LUW. The sashyReader provides for parallel and scalable processing of unstructured data stored locally or on a cloud via an SQL interface. This is useful for data ingest, data cleansing, data aggregation, and other tasks requiring the scanning, processing, and aggregation of large unstructured data sets. You also learn how to extend the sashyReader framework to read arbitrary unstructured text data by using dynamically pluggable Python classes.
22 May 2014
Measure the impact of DB2 with BLU Acceleration using IBM InfoSphere Optim Workload Replay
In this article, learn to use IBM InfoSphere Workload Replay to validate the performance improvement of InfoSphere Optim Query Workload Tuner (OQWT) driven implementation of DB2 with BLU Acceleration on your production databases. The validation is done by measuring the actual runtime change of production workloads that are replayed in an isolated pre-production environment.
Also available in: Chinese  
08 May 2014
Whitepaper: Protecting your critical data with integrated security intelligence
Learn how an integrated approach for extending security intelligence with data security insights can help organizations prevent attacks, ensure compliance, and reduce the overall costs of security management.
06 May 2014
Migrating 32-bit Informix ODBC applications to 64-bit
Informix 64-bit ODBC driver binaries have been available for many years, but the true 64-bit Informix ODBC driver was not introduced until Informix Client SDK v4.10 in early 2013. This article discusses the differences between the Informix 64-bit binaries of the Informix ODBC driver and the newer, true 64-bit driver. Also learn how to migrate your current 32-bit or 64-bit Informix ODBC applications to take advantage of the true 64-bit driver.
Also available in: Chinese  
01 May 2014
DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 6: High availability
This tutorial highlights the data integrity skills you need in order to protect your database against unexpected failures. Learn how to configure and manage the high availability (HA) features of DB2 V10.1, which introduced the HADR multiple standby setup that provides a true HA and disaster recovery (DR) solution for your mission-critical databases. Examples illustrate how to configure this feature. You also learn about the DB2 pureScale technology that provides continuous HA to your critical business operations. This is part 6 of a series of eight DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 tutorials.
01 May 2014
Archiving and recovery solutions for IBM Business Process Manager using InfoSphere Optim
Using simplified examples, this article shows how you can use IBM InfoSphere Optim to control data growth and, at the same time, maintain data privacy with IBM BPM.
30 Apr 2014
System Administration Certification exam 919 for Informix 11.70 prep, Part 1: Informix installation and configuration
In this tutorial, you'll learn about IBM Informix database server installation, configuration, and upgrade processes and strategies, and how to configure the different security options available in Informix. In addition, learn how to use different types of connections with the database server. This tutorial prepares you for Part 1 of the System Administration Certification exam 919 for Informix v11.70.
24 Apr 2014
XML or JSON: Guidelines for what to choose for DB2 for z/OS
IBM DB2 for z/OS offers document storage support for both JSON and XML. It is not always apparent whether JSON or XML is most suitable for a particular application. This article provides guidelines to help you select XML or JSON. It includes examples of creating, querying, updating, and managing in both JSON and XML in DB2 for z/OS.
27 Mar 2014
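The core trade-off behind the JSON-or-XML choice is in the data models themselves. A generic illustration using Python's standard libraries (not DB2 for z/OS syntax) shows the same record in both forms:

```python
import json
import xml.etree.ElementTree as ET

# The same order record expressed both ways (values invented for the sketch).
json_doc = '{"order": {"id": 42, "customer": "Acme"}}'
xml_doc = '<order id="42"><customer>Acme</customer></order>'

# JSON maps directly onto dictionaries and lists, which suits
# application objects and ad hoc query paths:
order = json.loads(json_doc)["order"]
assert order["customer"] == "Acme"
assert order["id"] == 42            # numbers are typed natively

# XML carries the same data but distinguishes attributes from elements
# and supports namespaces, mixed content, and schema validation:
root = ET.fromstring(xml_doc)
assert root.get("id") == "42"       # attribute values are strings
assert root.findtext("customer") == "Acme"
```

Roughly: JSON tends to fit application-generated records, while XML fits documents that need schemas, attributes, or mixed content; the article's guidelines expand on this in DB2 for z/OS terms.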
Use change data capture technology in InfoSphere Data Replication with InfoSphere BigInsights
Learn how to capture the changes made on source transactional databases such as IBM DB2 and Oracle, and replicate them to the Apache Hadoop Distributed File System in IBM InfoSphere BigInsights. Use change data capture replication technology in InfoSphere Data Replication 10.2 for InfoSphere DataStage with InfoSphere BigInsights support.
25 Mar 2014
Using the MDM Application Toolkit to build MDM-centric business processes, Part 3: Manage MDM hierarchies in BPM with the Application Toolkit
This is the third article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the IBM InfoSphere Master Data Management (MDM) application toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article focuses on extending the BPM Hello World scenario (introduced in Part 1) by displaying MDM hierarchical data in a BPM application. Learn how a REST server for the Application Toolkit is installed and configured to retrieve data from the MDM operational server. You'll use the IBM Process Designer, which is a component of BPM, to create and modify process applications.
15 Mar 2014
Speed up debugging of triggers, nested routines, and anonymous blocks with IBM Data Studio
IBM Data Studio is an Eclipse-based development tool for database developers and administrators. It offers a wide variety of features, including tools for debugging complex routines. Data Studio 4.1 has new features for DB2 for Linux, UNIX, and Windows (LUW) routine debugging. In this article, learn to use the new features to debug triggers, nested routines, and anonymous blocks quickly. Accelerate development of your routines with Data Studio 4.1.
13 Mar 2014
Parallel processing of unstructured data, Part 2: Use AWS S3 as an unstructured data repository
See how unstructured data can be processed in parallel fashion. Leverage the power of IBM DB2 for Linux, UNIX and Windows to provide efficient highly scalable access to unstructured data stored on the cloud.
Also available in: Russian  
13 Mar 2014
Ensuring transactional consistency with Netezza when using CDC and DataStage
This article shows you how to configure the Netezza Connector properly when transactions coming from InfoSphere Data Replication’s Change Data Capture (CDC) are first passed through DataStage before being written to PureData System for Analytics (Netezza). Walk through a use case where a flat file implementation provides near real-time experience. The author highlights Netezza Connector implementations that do and do not work.
06 Mar 2014
Integrate InfoSphere Streams with InfoSphere Data Explorer
Learn how to integrate IBM InfoSphere Streams with InfoSphere Data Explorer to enable Streams operators to connect to Data Explorer to insert and update records. The article focuses on InfoSphere Streams 3.0 or higher and InfoSphere Data Explorer 8.2.2 or 8.2.3.
04 Mar 2014
InfoSphere Guardium and the Amazon cloud, Part 1: Explore Amazon RDS database instances and vulnerabilities
The growing number of relational databases on the cloud accentuates the need for data protection and auditing. IBM InfoSphere Guardium offers real-time database security and monitoring, fine-grained database auditing, automated compliance reporting, data-level access control, database vulnerability management, and auto-discovery of sensitive data in the cloud. With the Amazon Relational Database Service (RDS) you can create and use your own database instances in the cloud and build your own applications around them. This two-part series explores how to use Guardium to protect database information in the cloud. This article describes how to use Guardium's discovery and vulnerability assessment with Amazon RDS database instances. Part 2 will cover how Guardium uses Amazon S3 for backup and restore.
Also available in: Chinese   Portuguese   Spanish  
27 Feb 2014
Optimizing BDFS jobs using InfoSphere DataStage Balanced Optimization
This article explains how to use InfoSphere DataStage Balanced Optimization to rewrite Big Data File Stage (BDFS) jobs into Jaql. The BDFS stage, introduced in InfoSphere Information Server 9.1, operates on InfoSphere BigInsights. To optimize the performance of BDFS jobs, InfoSphere DataStage Balanced Optimization is required: it redesigns a job to maximize performance by minimizing the amount of input and output performed and by balancing the processing against source, intermediate, and target environments. Readers will learn how to use InfoSphere DataStage Balanced Optimization with the BDFS stage, along with the configuration parameters required for the Jaql Connector when a job is optimized.
20 Feb 2014
RUNSTATS: What's new in DB2 10 for Linux, UNIX, and Windows
DB2 10.1 provides significant usability, performance, serviceability, and database administration enhancements for database statistics. In this article, learn about the significant performance enhancements to the RUNSTATS facility. Examples show how to take advantage of new features such as new keywords, index sampling options, enhancements to automatic statistics collection, and functions to query asynchronous automatic runstats work.
Also available in: Russian  
13 Feb 2014
Using R with databases
R is not just the 18th letter of the English alphabet; it is also a powerful open source programming language that excels at data analysis and graphics. This article explains how to use the power of R with data that's housed in relational database servers. Learn how to use R to access data stored in DB2 with BLU Acceleration and IBM BLU Acceleration for Cloud environments. Detailed examples show how R can help you explore data and perform data analysis tasks.
Also available in: Russian  
06 Feb 2014
IBM Business Analytics Proven Practices: IBM Cognos BI Dispatcher Routing Explained
A short document to explain the main concepts of IBM Cognos Dispatchers when routing requests.
Also available in: Chinese   Russian   Spanish  
31 Jan 2014
Parallel processing of unstructured data, Part 1: With DB2 LUW and GPFS SNC
Learn how unstructured data can be processed in parallel fashion -- within a machine and across a series of machines -- by leveraging DB2 for Linux, UNIX, and Windows and GPFS SNC to provide efficient, highly scalable access to unstructured data, all through a standard SQL interface. This capability can be realized with clusters of commodity hardware, suitable for provisioning in the cloud or directly on bare metal. Scalability is achieved within the framework via the principle of computation locality: computation is performed local to the host that has direct data access, minimizing or eliminating network bandwidth requirements and eliminating the need for any shared compute resource.
Also available in: Russian  
30 Jan 2014
DB2 monitoring: Tracing SQL statements by using an activity event monitor
This article describes a technique to easily trace (capture) the SQL statements that a client application executes. The technique uses monitoring features in IBM DB2 for Linux, UNIX, and Windows Version 9.7 Fix Pack 4 and higher.
Also available in: Russian  
23 Jan 2014
Migrating from Sybase to DB2, Part 1: Project description
This article describes the processes and techniques used to migrate trigger code from Transact SQL (Sybase) into SQL PL (DB2). Part 1 describes the intended goal and scope of the project. Part 2 talks about the considerations and challenges we had to overcome to make the database vendor transparent to the application.
Also available in: Russian  
16 Jan 2014
DB2 10.1 fundamentals certification exam 610 prep, Part 4: Working with DB2 Data using SQL
This tutorial shows you how to use SQL statements such as SELECT, INSERT, UPDATE, DELETE, and MERGE to manage data in tables of a SAMPLE database. It also shows how to perform transactions by using the COMMIT, ROLLBACK, and SAVEPOINT statements, how to create stored procedures and user-defined functions, and how to use temporal tables. This is the fourth tutorial in the six-part DB2 10.1 fundamentals certification exam 610 prep series.
16 Jan 2014
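The transaction statements the tutorial covers behave as sketched below, here run against SQLite rather than the DB2 SAMPLE database (the employee table and its values are invented for illustration, and SQLite has no MERGE):

```python
import sqlite3

# Autocommit mode so we can issue transaction statements explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

conn.execute("BEGIN")
conn.execute("INSERT INTO employee VALUES (1, 'Ann', 50000)")
conn.execute("INSERT INTO employee VALUES (2, 'Bob', 48000)")
conn.execute("SAVEPOINT before_changes")
conn.execute("UPDATE employee SET salary = salary * 1.10 WHERE id = 1")
conn.execute("DELETE FROM employee WHERE id = 2")
# Undo the UPDATE and DELETE but keep both INSERTs.
conn.execute("ROLLBACK TO SAVEPOINT before_changes")
conn.execute("COMMIT")

count = conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
salary = conn.execute("SELECT salary FROM employee WHERE id = 1").fetchone()[0]
print(count, salary)
```

Rolling back to a savepoint undoes only the work done since the savepoint, which is exactly the partial-undo behavior the tutorial's SAVEPOINT section describes.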
IBM Entrepreneur Week
IBM Entrepreneur Week is a one-of-a-kind opportunity for you to meet, interact, and connect with entrepreneurs, venture capitalists, industry leaders, and academics from around the world. If you're a startup or entrepreneur, join us online for our inaugural IBM Entrepreneur Week, 3-7 Feb 2014. There will be events taking place online and in locations worldwide, including face-to-face and virtual mentoring sessions, a women entrepreneur-focused event, and a LiveStream broadcast of the SmartCamp Global Finals in San Francisco.
15 Jan 2014
Applying IBM InfoSphere Information Analyzer rules for operational quality measurements
A key challenge in the day-to-day management of any solution is measuring whether the solution components are meeting IT and business expectations. Given these requirements, it becomes incumbent upon IT to build processes that track performance against these objectives. By employing such measurements, IT can then take action whenever thresholds for expected operational behavior are exceeded. Assessing and monitoring operational quality in information integration processes requires establishing rules that are meaningful in relation to the existing operational metadata. Rather than starting off with a blank slate, this article demonstrates how to use pre-built rule definitions from IBM InfoSphere Information Analyzer to get under way in tracking the operational quality of IBM InfoSphere Information Server's data integration processing.
09 Jan 2014
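The threshold-style rule evaluation described in that summary can be sketched in a few lines; the job names, metrics, and 1% reject threshold below are invented for illustration and are not Information Analyzer rule definitions:

```python
# Invented operational metadata for two hypothetical integration job runs.
job_runs = [
    {"job": "load_customers", "rows_read": 10_000, "rows_rejected": 12},
    {"job": "load_orders",    "rows_read": 50_000, "rows_rejected": 2_600},
]

def evaluate(run, max_reject_pct=1.0):
    """Flag a run whose reject percentage exceeds the allowed threshold."""
    pct = 100.0 * run["rows_rejected"] / run["rows_read"]
    return {"job": run["job"], "reject_pct": pct, "passed": pct <= max_reject_pct}

results = [evaluate(r) for r in job_runs]
breaches = [r["job"] for r in results if not r["passed"]]
print(breaches)
```

The point of pre-built rule definitions is that rules like this, evaluated against operational metadata on every run, surface breaches without anyone re-deriving the thresholds each time.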
What's new in InfoSphere Workload Replay for DB2 for z/OS v2.1
InfoSphere Optim Workload Replay for DB2 for z/OS (Workload Replay) extends traditional database test coverage. Now you can capture production workloads and replay them in your test environment without the need to set up a complex client and middleware infrastructure. In October 2013, version 2.1 of Workload Replay was released, with key enhancements that we describe in this article.
19 Dec 2013
DB2 10.1 for Linux, UNIX, and Windows DBA certification exam 611 prep, Part 1: Server management
This tutorial helps you learn the skills required to manage DB2 database servers, instances, and databases. You will also be introduced to DB2 autonomic computing capabilities and learn to use IBM Data Studio to perform database administration tasks such as job scheduling and generating diagrams of access plans. This tutorial prepares you for Part 1 of the DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611.
Also available in: Chinese  
05 Dec 2013
Developing InfoSphere MDM Collaboration Server Java API based extension points
IBM InfoSphere Master Data Management (MDM) Collaborative Edition is master data management middleware that establishes a single, integrated, consistent view of master data inside and outside of an enterprise and supports building master data in collaborative ways. MDM Collaborative Edition can handle master data management for diverse domains such as retail, telecom, banking, and energy, and can address different use cases. To achieve this, it provides flexible data modeling, direct and workflow-based collaborative authoring of master data, import and export of master data in various formats, and various integration capabilities. MDM Collaborative Edition can deliver relevant and unique content to any person, system, partner, or customer, thereby accelerating time-to-market and reducing costs. It exposes several interfaces through which the system can be customized, with MDM Collaborative Edition Scripting and MDM Collaborative Edition Java APIs being the major ones. This article details how to develop applications using the MDM Collaborative Edition Java APIs in the Eclipse Integrated Development Environment (IDE). It highlights the steps required to set up the environment, create a Java API-based project, run the classes, and debug using the Eclipse IDE, illustrating each step with sample code sections and screenshots.
05 Dec 2013
Process big data with Big SQL in InfoSphere BigInsights
SQL is a practical querying language, but it has limitations. Big SQL lets you run complex queries with an SQL-like language against data that may be non-tabular and may, in fact, not be based on a typical SQL database structure. Using Big SQL, you can import and process large-volume data sets, including the processed output of other jobs within InfoSphere BigInsights, turning that information into easily queryable data. In this article, we look at how you can replace your existing infrastructure and queries with Big SQL, and how to convert more complex queries to make use of your Big SQL environment.
Also available in: Russian  
03 Dec 2013
Optim Service Interface usage guide
This article demonstrates how to create a custom user experience for managing Optim services by tapping into functionality provided by the Optim Service Interface (OSI). The OSI provides a headless public interface to a number of the same back-end web services used by the Optim Manager web application. It allows you to create your own front end that communicates with public RESTful resources through the marshaling of clearly defined XML payloads. A sample web application demonstrating this power and flexibility accompanies this article.
Also available in: Chinese  
21 Nov 2013
InfoSphere Data Architect: Best practices for modeling and model management
This article explains how to create models with better maintainability and to change the models in a team environment using best practices. The article also details the product settings that you should set for the product to work at optimal level with the given resources.
14 Nov 2013
DB2 Advanced Copy Services: The scripted interface for DB2 Advanced Copy Services, Part 3
IBM DB2 Advanced Copy Services (DB2 ACS) supports taking snapshots for backup purposes in DB2 for Linux, UNIX, and Windows databases. You can use the DB2 ACS API either through libraries implemented by your storage hardware vendor (though until now, only some vendors provide them) or implement the API yourself, which involves considerable effort. This changes with IBM DB2 10.5.
07 Nov 2013
Compare the distributed DB2 10.5 database servers
In a side-by-side comparison table, the authors make it easy to understand the basic licensing rules, functions, and feature differences among the members of the distributed DB2 10.5 for Linux, UNIX, and Windows server family as of June 14, 2013.
Also available in: Chinese   Japanese  
04 Nov 2013
Licensing distributed DB2 10.5 servers in a high availability (HA) environment
Are you trying to license your IBM DB2 Version 10.5 for Linux, UNIX, and Windows servers correctly in a high availability environment? Do you need help interpreting the announcement letters and licenses? This article explains it all in plain English for the DB2 10.5 release that became generally available on June 14, 2013.
Also available in: Chinese   Japanese  
04 Nov 2013
DB2 editions: Which distributed edition of DB2 10.5 is right for you?
Learn the details of what makes each edition of IBM DB2 10.5 for Linux, UNIX, and Windows unique. The authors lay out the specifications for each edition, licensing considerations, historical changes throughout the DB2 release cycle, and references to some interesting things that customers are doing with DB2. This popular article will be updated during the release with any intra-version licensing changes that are announced in future fix packs.
Also available in: Chinese   Japanese  
03 Nov 2013
Tuning queries and workloads from InfoSphere Optim Performance Manager
Learn more about query tuning using the InfoSphere Optim Performance Manager (OPM) web console. Step-by-step descriptions show you how to tune queries and manage recommendations in the OPM web console, and how to use the OPM-supplied tuning configuration scripts to configure the monitored database for tuning. The article also offers tips on troubleshooting tuning-related problems and on database table space management.
Also available in: Russian  
31 Oct 2013
InfoSphere Guardium data security and protection for MongoDB, Part 1: Overview of the solution and data security recommendations
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium. Part 1 describes an overview of the solution, the architecture, and the benefits of using InfoSphere Guardium with MongoDB. The value of the fast growing class of NoSQL databases such as MongoDB is the ability to handle high velocity and volumes of data while enabling greater agility with dynamic schemas. Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations. This article series describes configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process.
Also available in: Chinese   Portuguese  
30 Oct 2013
InfoSphere Guardium data security and protection for MongoDB Part 2: Configuration and policies
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium, including the configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process. Part 2 describes how to configure InfoSphere Guardium to collect MongoDB traffic and describes how to create security policy rules for a variety of typical data protection use cases, such as alerting on excessive failed logins, monitoring privileged users, and alerting on unauthorized access to sensitive data. Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations.
Also available in: Chinese   Portuguese  
30 Oct 2013
Using the MDM Application Toolkit to build MDM-centric business processes, Part 2: Performing CRUD operations against MDM using the Application Toolkit
This is the second in a series of articles that describe how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the InfoSphere Master Data Management (MDM) Application Toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article focuses on extending the Hello World scenario to perform a full set of create, retrieve, update, and delete (CRUD) operations against an MDM operational server. The article shows you how to create human services quickly and simply within BPM to drive operations on the MDM server. I start by constructing a create process, and then proceed with update, delete, and retrieve. This article focuses on the speed and simplicity with which you can create CRUD business processes using the MDM Application Toolkit. While more advanced interactions are possible, they are deferred for later articles in the series.
24 Oct 2013
Using InfoSphere Streams with memcached and Redis
InfoSphere Streams is a powerful middleware product that provides a platform for development of streaming applications and their execution in a fast and distributed manner. In a stream processing system, there is often a need to externalize the application-related state information and share it with other distributed application components. It is possible to do distributed data sharing within the context of a Streams application. This is achieved through the use of the Distributed Process Store (dps) toolkit that provides a key-value store abstraction. The shared state can be stored in memcached or Redis -- two popular open-source distributed state management systems.
22 Oct 2013
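The key-value abstraction the dps toolkit provides can be pictured with a minimal in-memory stand-in; the class and method names below are invented and do not reflect the toolkit's actual SPL interface, which is backed by a real memcached or Redis server rather than a local dict:

```python
# Invented stand-in for a distributed process store: named stores holding
# key-value pairs shared among application components.
class ProcessStore:
    def __init__(self):
        self._stores = {}

    def create_store(self, name):
        self._stores.setdefault(name, {})

    def put(self, store, key, value):
        self._stores[store][key] = value

    def get(self, store, key, default=None):
        return self._stores[store].get(key, default)

    def remove(self, store, key):
        self._stores[store].pop(key, None)

dps = ProcessStore()
dps.create_store("session_state")
# One operator writes shared state; another can later read it by key.
dps.put("session_state", "sensor42", {"last_reading": 17.5})
print(dps.get("session_state", "sensor42"))
```

In the real toolkit, the store lives outside any single process, which is what lets distributed Streams operators share state.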
Guaranteed delivery with InfoSphere DataStage
This article describes how you can use the InfoSphere DataStage Distributed Transaction Stage to guarantee delivery of data. It also addresses the use of local transactions within DataStage database stages. Finally, it describes how the Change Data Capture Transaction stage works with InfoSphere Data Replication to guarantee the delivery of changes to a target database.
Also available in: Chinese  
17 Oct 2013
DB2 for Linux, UNIX, and Windows HADR Simulator use case and troubleshooting guide
Although DB2 high availability disaster recovery (HADR) is billed as a feature that's easy to set up, customers often have problems picking the right settings for their environment. This article is a use case, and it shows how you can use the HADR simulator tool to configure and troubleshoot your HADR configuration in a real-world scenario. Using the examples and generalized guidance that this article provides, you should be able to test your own setups and pick the optimal settings.
17 Oct 2013
Develop custom KPIs using the Policy Monitoring JobFramework
This article discusses the basic structure of the JobFramework and its application to the definition of a custom KPI, using the Latency KPI as an example. The Latency KPI calculates the time that is required to propagate data changes from the data sources to the operational server, which is an important characteristic of the data consistency and trustworthiness. This article also describes how to navigate the new Latency KPI reports using IBM Cognos Business Intelligence Server.
Also available in: Chinese  
10 Oct 2013
DB2 Advanced Copy Services: The scripted interface for DB2 Advanced Copy Services, Part 2
DB2 Advanced Copy Services (DB2 ACS) supports taking snapshots for backup purposes in DB2 for Linux, UNIX, and Windows databases. Customers can use the DB2 ACS API either through libraries implemented by their storage hardware vendors or implement the API on their own, which requires a high degree of effort. This changes with IBM DB2 10.5.
10 Oct 2013
A seamless upgrade to DB2 Text Search V10.5
IBM DB2 Text Search is integrated with DB2 V10.5 and has been available since V9.5. It is equipped with a highly sophisticated, feature-rich, full-text search server and provides powerful indexing and search capabilities. This article describes various methods of upgrading to DB2 Text Search V10.5. It uses an example to walk you through an upgrade scenario. In addition, it provides troubleshooting hints to resolve common upgrade problems.
03 Oct 2013
Using the MDM Application Toolkit to build MDM centric business processes, Part 1: Integrate BPM with MDM
This is the first article in a series that describes how to integrate IBM Business Process Manager (BPM) and master data. Specifically, this series refers to BPM 8.0.1 and the InfoSphere Master Data Management (MDM) Application Toolkit, both of which are provided with MDM 11.0. This article describes a Hello World scenario that shows you how to use the application toolkit to search for and retrieve data from MDM. This data is then displayed on a BPM Coach.
03 Oct 2013
Compare the Informix Version 12 editions
Get an introduction to the various editions of IBM Informix, and compare features, benefits, and licensing considerations in a side-by-side table. Regardless of which edition you choose, Informix brings you legendary ease-of-use, reliability, stability, and access to extensibility features.
Also available in: Chinese   Russian   Portuguese  
01 Oct 2013
System log analysis using InfoSphere BigInsights and IBM Accelerator for Machine Data Analytics
When understood, logs are a goldmine for debugging, performance analysis, root-cause analysis, and system health assessment. In this real business case, see how InfoSphere BigInsights and the IBM Accelerator for Machine Data Analytics are used to analyze system logs to help determine root causes of performance issues, and to define an action plan to solve problems and keep the project on track.
01 Oct 2013
Informix Dynamic Server data compression and storage optimization
Starting with IBM Informix Dynamic Server (IDS) Version 11.50.xC4, you can compress data and optimize storage in IDS databases. The advantages of data compression and storage optimization include significant storage savings, reduced I/O activity, and faster backup and restore. IDS provides full online support for enabling storage optimization and compressing existing table data, while applications continue to use the table. This article provides an overview of IDS data compression and storage optimization functionality and shows you how to perform both tasks.
Also available in: Chinese  
26 Sep 2013
Why low cardinality indexes negatively impact performance
Low cardinality indexes can be bad for performance. But, why? There are many best practices like this that DBAs hear and follow but don't always understand the reason behind. This article will empower the DBA to understand the logic behind why low cardinality indexes can be bad for performance or cause erratic performance. The topics that are covered in this article include B-tree indexes, understanding index cardinality, hypothetical examples of the effects of low cardinality indexes, a real-world example of the effects of a low cardinality index, and tips on how to identify low cardinality indexes and reduce their impact on performance.
26 Sep 2013
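The core arithmetic behind that argument is simple: assuming a uniform value distribution, an equality predicate on an indexed column matches roughly rows / cardinality rows, so an index on a two-value column sends the database after half the table:

```python
# Back-of-envelope selectivity estimate: average rows matched by an
# equality predicate, assuming values are uniformly distributed.
def rows_per_key(table_rows, index_cardinality):
    return table_rows / index_cardinality

table_rows = 1_000_000
print(rows_per_key(table_rows, 2))        # e.g. a two-value status flag: half the table
print(rows_per_key(table_rows, 900_000))  # e.g. a near-unique account number: about one row
```

When rows_per_key is large, each index lookup fans out into many scattered row fetches, which is often slower than simply scanning the table, and is why optimizers may ignore low cardinality indexes or use them erratically.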
Working with Big SQL extended and complex data types
Big SQL, an SQL interface introduced in InfoSphere BigInsights, offers many useful extended data types. In general, a data type defines the set of properties for the values being represented, and these properties dictate how the values are treated. Big SQL supports a rich set of data types, including extended data types that are not supported by Apache Hive. With the data types supported by Big SQL, it's easier to represent and process semi-structured data. Using the code samples and queries included, learn how to use Big SQL complex data types in simple and nested form and how to create and implement these types in an application. As an added bonus, see how to use the Serializer Deserializer (SerDe) to work with JSON data.
Also available in: Chinese   Russian  
24 Sep 2013
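As a rough analogy for those complex types (this is plain Python, not Big SQL syntax), a JSON record with a struct-like field and an array-like field can be unnested into flat rows; the record below is invented:

```python
import json

# An invented record: "address" plays the role of a struct type and
# "phones" the role of an array type.
record = json.loads("""
{"id": 1,
 "address": {"city": "Austin", "zip": "78701"},
 "phones": ["555-0100", "555-0199"]}
""")

# Unnesting the array produces one flat row per element, which is the
# kind of transformation complex data types let a query express directly.
rows = [(record["id"], record["address"]["city"], phone)
        for phone in record["phones"]]
print(rows)
```

A JSON SerDe does the deserialization step for you at query time, so semi-structured records like this can be addressed with SQL instead of hand-written parsing code.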
DB2 with BLU Acceleration: A rapid adoption guide
You have probably heard how DB2 with BLU Acceleration can provide performance improvements ranging from 10x to 25x and beyond for analytical queries with minimal tuning. You are probably eager to understand how your business can leverage this cool technology for your warehouse or data mart. The goal of this article is to provide you with a quick and easy way to get started with BLU. We present a few scenarios to illustrate the key setup requirements to start leveraging BLU technology for your workload.
19 Sep 2013
Get to know the R-project Toolkit in InfoSphere Streams
InfoSphere Streams addresses a crucial emerging need for platforms and architectures that can process vast amounts of generated streaming data in real time. The R language is popular and widely used among statisticians and data miners for developing statistical software for data manipulation, statistical computations, and graphical displays. Learn about the InfoSphere Streams R-project Toolkit that integrates with the powerful R suite of software facilities and packages.
Also available in: Chinese   Russian  
17 Sep 2013
Migrate terabytes of data from IBM Balanced Warehouse to IBM Smart Analytics System
In today's ever-demanding world, data warehouse environments continue to grow exponentially, both in terms of data and real-time data processing requirements. To meet these needs, organizations have to make the right decisions about moving applications to the right platform and, more importantly, at the right time. Reckitt Benckiser Group plc was an early adopter of the IBM Balanced Configuration Unit (BCU) Warehouse and recently upgraded to the next-generation IBM Smart Analytics System (ISAS) to give financial customers a better user experience while providing higher data capacity.
12 Sep 2013
Getting started with real-time stream computing
Use InfoSphere Streams to turn volumes of data into information that helps predict trends, gain competitive advantage, gauge customer sentiment, monitor energy consumption, and more. InfoSphere Streams acts on data in motion for real-time analytics. Get familiar with the product and find out where to go for tips and tricks that speed implementation.
Also available in: Chinese   Russian  
10 Sep 2013
IBM Database Conversion Workbench, Part 1: Overview
The IBM Database Conversion Workbench (DCW) is a no-charge plug-in that adds database migration capabilities to IBM Data Studio. DCW integrates many of the tools used for database conversion into a single integrated environment, following an easy-to-use framework that is based on best practices from IBM migration consultants. This first article in the series provides an overview of conversion methodology and the various functions in DCW 2.0.
05 Sep 2013
Do I need to learn R?
R is a flexible programming language designed to facilitate exploratory data analysis, classical statistical tests, and high-level graphics. With its rich and ever-expanding library of packages, R is on the leading edge of development in statistics, data analytics, and data mining. R has proven itself a useful tool within the growing field of big data and has been integrated into several commercial packages, such as IBM SPSS and InfoSphere, as well as Mathematica. This article offers a statistician's perspective on the value of R.
Also available in: Chinese  
03 Sep 2013
Configuring DB2 Text Search in a partitioned environment
DB2 Text Search enables DB2 database applications to perform full-text search by using embedded full-text search clauses in SQL and XQuery statements, allowing you to create powerful text-retrieval programs. DB2 Text Search supports full-text search in both non-partitioned and partitioned database environments. Partitioned setups are often used for large workloads, and because the text search index is partitioned according to the partitioning of the table, careful planning of configuration and administration tasks is needed to account for search performance and high availability requirements. The article describes the concepts behind the text index partitioning scheme and its impact on administration, as well as the text search configuration of a sample partitioned database setup. In addition, it discusses monitoring features and workload control options.
29 Aug 2013
ZooKeeper fundamentals, deployment, and applications
Apache ZooKeeper is a high-performance coordination server for distributed applications. It exposes common services -- such as naming and configuration management, synchronization, and group services -- in a simple interface, relieving the user from the need to program from scratch. It comes with off-the-shelf support for implementing consensus, group management, leader election, and presence protocols. In this article, we will explore the fundamentals of ZooKeeper, then walk through a guide to set up and deploy a ZooKeeper cluster in a simulated miniature distributed environment. We will conclude with examples of how ZooKeeper is used in popular projects.
27 Aug 2013
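ZooKeeper's leader election recipe -- each participant creates an ephemeral sequential node, and the one holding the lowest sequence number leads -- can be simulated without a ZooKeeper server; the node-name format below is illustrative, not ZooKeeper's actual API:

```python
import itertools

# Stand-in for ZooKeeper's monotonically increasing sequence numbers.
counter = itertools.count(0)

def join(election, participant):
    """Create a 'sequential node' for a participant, as in the recipe."""
    seq = next(counter)
    election[f"candidate-{seq:010d}"] = participant
    return seq

election = {}
for p in ["worker-a", "worker-b", "worker-c"]:
    join(election, p)

# The lexicographically lowest node name (lowest sequence) is the leader.
leader = election[min(election)]
print(leader)
```

In real ZooKeeper, the nodes are ephemeral, so a leader's node disappears when its session dies and the next-lowest participant takes over automatically.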
Managing your InfoSphere Streams cluster with IBM Platform Computing
Today, the challenge for many organizations is extracting value from the imposing volumes of data available to them. Tackling the big data challenge can fundamentally improve how an organization does business and makes decisions. But managing your big data infrastructure doesn't have to be challenging. With the appropriate management strategy and tools, multiple large environments can be set up and managed efficiently and effectively. This article describes how to use IBM Platform Computing to set up and manage IBM InfoSphere Streams environments that will analyze big data in real time.
20 Aug 2013
Best practices for using InfoSphere Federation Server to integrate web service data sources
This article introduces the overall architecture of the web services wrapper of IBM InfoSphere Federation Server. It explains, step by step, how to integrate data from web service providers by using web service nicknames, and it introduces some of the wrapper's restrictions.
Also available in: Chinese  
15 Aug 2013
DB2 SQL development: Best practices
Most relational tuning experts agree that the majority of performance problems among applications that access a relational database are caused by poorly coded programs or improperly coded SQL. This article provides a reference for developers who need to tune an SQL statement or program. Most of the time, the slowness is directly related to how you design your programs or structure your SQL statements. This article gives you something to fall back on so that you can first try to resolve the performance issues at hand before calling in DBAs or others.
Also available in: Chinese  
15 Aug 2013
Deploying and managing scalable web services with Flume
Machine-generated log data is valuable for locating the causes of various hardware and software failures, and the information derived from it can provide feedback for improving system architecture, reducing system degradation, and improving uptime. Recently, businesses have started using this log data to derive business insight. Flume is a distributed, fault-tolerant service for efficiently collecting, aggregating, and moving large amounts of log data. In this article, we will learn how to deploy and use Flume with a Hadoop cluster and a simple distributed web service.
Also available in: Russian  
13 Aug 2013
Use Optim Data Tools to get the most out of BLU Acceleration
BLU Acceleration is an exciting new feature in IBM DB2 10.5 that can speed up complex analytic workloads tremendously. Workloads run faster without the need for tuning. At the heart of this new technology is the column-organized format for user tables. It's easy to create and load tables in the new column-organized format, as is ongoing maintenance because there is no need for indexes or tuning materialized query tables (MQTs). This article walks you through three scenarios that illustrate how to use Data Studio and IBM InfoSphere Optim Query Workload Tuner (OQWT) with the new BLU Acceleration feature.
Also available in: Chinese   Russian  
08 Aug 2013
