Technical library


1 - 100 of 3115 results

Integrate data with SoftLayer Object Storage by using IBM InfoSphere DataStage
This article illustrates how to use IBM InfoSphere DataStage to integrate with SoftLayer Object Storage.
Articles 10 Jul 2014
Integrate data with Cloudant and CouchDB NoSQL database using IBM InfoSphere Information Server
In this article, learn how to use the Cloudant database and CouchDB with the IBM InfoSphere DataStage V11.3 Hierarchical stage (previously called XML Connector). Detailed examples show how to invoke the Cloudant API, using the REST service of the Hierarchical Data stage, to access and change data on the Cloudant database with support for HTTP basic authentication. You can modify the examples to perform the same integration operations on a CouchDB database. The examples also use other steps of the Hierarchical stage to retrieve documents and parse or compose them into a relational structure.
Articles 10 Jul 2014
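The article above drives these calls from the REST step of the Hierarchical Data stage. As a point of reference, here is a minimal standalone sketch of the same kind of Cloudant/CouchDB REST calls with HTTP basic authentication, written in Python with the requests library; the account URL, database, document ID, and credentials are placeholder values, not taken from the article.

```python
# Minimal sketch: read and update a Cloudant/CouchDB document over REST with
# HTTP basic authentication. All names below are placeholders.
import requests

BASE = "https://ACCOUNT.cloudant.com"   # hypothetical account URL
AUTH = ("username", "password")         # HTTP basic authentication

# Read a document by ID.
doc = requests.get(f"{BASE}/mydb/doc-001", auth=AUTH).json()

# Update it. Cloudant/CouchDB require the current _rev on every write,
# which is why the read precedes the write.
doc["status"] = "processed"
resp = requests.put(f"{BASE}/mydb/doc-001", json=doc, auth=AUTH)
resp.raise_for_status()
print(resp.json())   # e.g. {'ok': True, 'id': 'doc-001', 'rev': '2-...'}
```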
Create a business intelligence and analytics service in Ruby with Analytics Warehouse on IBM Bluemix
The Analytics Warehouse Service available in IBM Bluemix provides a powerful, easy-to-use, and agile platform for business intelligence and analytics. It is an enterprise-class managed service that is powered by the in-memory optimized, column-organized BLU Acceleration data warehouse technology. This article demonstrates how easy it is to incorporate the Analytics Warehouse service into your application so that you can focus on your application.
Also available in: Chinese   Japanese  
Articles 10 Jul 2014
Deploy InfoSphere MDM Collaborative Edition onto a cluster, Part 1: Strategies for mixed clustered topologies on an application server
This two-part tutorial shows how to set up a typical IBM InfoSphere Master Data Management (MDM) Collaborative Edition V11 clustered environment. This first part describes how to deploy InfoSphere MDM Collaborative Edition V11 onto a cluster. Follow along step-by-step with detailed examples of several clustering strategies. Also learn about the required configuration to enable the IBM HTTP Server for caching and load balancing the cluster nodes.
Articles 10 Jul 2014
Integrate the Information Governance Catalog and IBM InfoSphere DataStage using REST
IBM InfoSphere DataStage is a data integration tool that lets you move and transform data between operational, transactional, and analytical target systems. In this article, learn to use InfoSphere DataStage to integrate with the REST resources of the Information Governance Catalog Glossary. A sample use case shows how to design a DataStage job to take advantage of the Hierarchical Data stage to access and author the Information Governance Catalog Glossary content. The REST step is a new capability of the Hierarchical Data stage (previously called XML Connector in InfoSphere Information Server) to invoke the REST Web services with support for different authentication mechanisms and Secure Socket Layer (SSL). Learn to parse the response body of the REST call and apply transformations on the data.
Articles 03 Jul 2014
IBM Cognos Business Intelligence
Download IBM Cognos Business Intelligence Developer Edition V10.2.1. IBM Cognos BI Developer Edition lets you explore a rich set of business intelligence capabilities without incurring any upfront costs. Access any and all data sources and use reporting and analysis to experiment with how to deliver relevant information how, when, and where it is needed. Broaden your skills and learn how to build BI applications on an enterprise-class SOA foundation. Expand your opportunities in BI and become an IBM partner.
Also available in: Chinese   Portuguese  
Trial Downloads 02 Jul 2014
Explore the eXtreme Scale-based caching service options in IBM PureApplication System
Caching services are a popular solution to address performance and scalability issues for enterprise cloud applications. Explore three caching options available with IBM PureApplication System: one built-in, one based on WebSphere eXtreme Scale that uses a virtual system pattern (VSP) on a cluster, and one based on eXtreme Scale that uses a VSP with a core OS image.
Articles 28 Jun 2014
Build a data mining app using Java, Weka, and the Analytics Warehouse service
The Analytics Warehouse (formerly BLU Acceleration) service provides data warehousing and analytics as a service on IBM Bluemix. Developers can develop and deploy a heavy-duty analytic application using blazing-fast IBM BLU database technology offered in the cloud. Learn how to develop a data mining application using the Weka statistical analysis tool and leveraging the IBM BLU columnar database.
Also available in: Chinese   Japanese  
Articles 27 Jun 2014
Using the MDM Application Toolkit to build MDM-centric business processes, Part 4: Work with MDM integration services
This is the fourth article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the IBM InfoSphere Master Data Management (MDM) application toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article explores the BPM integration services provided with the Application Toolkit. Learn how these services help you easily create workflows that integrate with InfoSphere MDM.
Articles 26 Jun 2014
Prepare the server environment to integrate IBM DataStage and InfoSphere Data Replication CDC
IBM InfoSphere Data Replication (IIDR) Change Data Capture (CDC) offers direct integration with InfoSphere DataStage by using a transaction stage job. In this article, follow the step-by-step process that outlines all the required tasks to prepare the DataStage environment to enable IIDR CDC integration.
Articles 19 Jun 2014
Simplifying database administration by using Administration Console
This video introduces the main features of Administration Console and shows you how to use it to identify and investigate database problems before they impact your business.
Articles 19 Jun 2014
Increase DB2 availability
This article demonstrates how to use a cloud provider to create a reliable tie-breaking method to avoid a split brain scenario. The procedure is for two node clusters running IBM DB2 for Linux, UNIX and Windows (LUW) and the integrated high availability (HA) infrastructure. See how to automate any two-node failover for DB2 LUW 10.1 or higher.
Articles 19 Jun 2014
Integrate DB2 for z/OS with InfoSphere BigInsights, Part 1: Set up the InfoSphere BigInsights connector for DB2 for z/OS
Learn how to set up integration between IBM DB2 11 for z/OS and IBM InfoSphere BigInsights. Enable access to structured and non-structured data that is stored in the Hadoop Distributed File System and send the results back to DB2, where the data can be integrated with online transactional data. Using a scenario that is common to all DB2 for z/OS users, learn to create a big data solution that uses the user-defined functions JAQL_SUBMIT and HDFS_READ to run jobs on InfoSphere BigInsights and retrieve the results with SQL.
Articles 17 Jun 2014
Calculate storage capacity of indexes in DB2 for z/OS
With the drastic growth of data stored in IBM DB2 tables, index sizes are bound to increase. Most indexes are still uncompressed, so there's an urgent need to monitor indexes to avoid unforeseen outages related to index capacity. This article describes a process to calculate the capacity limit for various types of indexes. Once you know how to calculate the capacity of indexes in different situations, you can monitor their growth to avoid outages.
Articles 12 Jun 2014
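For a rough sense of the arithmetic involved, the sketch below estimates leaf-page usage for a hypothetical uncompressed index. The page and entry overheads are illustrative assumptions only; the exact DB2 for z/OS formulas for the various index types are what the article works through.

```python
# Simplified capacity estimate for an uncompressed index (illustrative
# assumptions, not the exact DB2 for z/OS formulas).
PAGE_SIZE      = 4096     # bytes per index leaf page
USABLE_BYTES   = 4038     # assumed usable space after page header/trailer
KEY_LENGTH     = 26       # bytes per key
RID_LENGTH     = 5        # bytes per row identifier
ENTRY_OVERHEAD = 2        # assumed per-entry overhead

rows = 500_000_000
bytes_per_entry  = KEY_LENGTH + RID_LENGTH + ENTRY_OVERHEAD
entries_per_page = USABLE_BYTES // bytes_per_entry
leaf_pages       = -(-rows // entries_per_page)   # ceiling division

print(f"{entries_per_page} entries per page, ~{leaf_pages:,} leaf pages "
      f"(~{leaf_pages * PAGE_SIZE / 2**30:.1f} GB at the leaf level)")
```

Repeating the same estimate as the row count grows is the essence of the monitoring the article recommends.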
Convert row-organized tables to column-organized tables in DB2 10.5 with BLU Acceleration
With IBM DB2 10.5 you can easily convert row-organized tables to column-organized tables by using command line utilities or IBM Optim Data Studio 4.1. This article introduces two approaches for table conversions: the db2convert command and the ADMIN_MOVE_TABLE stored procedure. We also describe a manual conversion. Learn the advantages and disadvantages of the different conversion approaches. Best practices are also discussed.
Articles 12 Jun 2014
DB2 10.1 fundamentals certification exam 610 prep, Part 2: DB2 security
This tutorial introduces authentication, authorization, privileges, and roles as they relate to IBM DB2 10.1. It also introduces granular access control and trusted contexts. This is the second in a series of six tutorials designed to help you prepare for the DB2 10.1 Fundamentals certification exam (610). It is assumed that you already have basic knowledge of database concepts and operating system security.
Articles 05 Jun 2014
Understand the "Heartbleed" bug
Learn the technical details of the "Heartbleed" bug.
Articles 28 May 2014
Parallel processing of unstructured data, Part 3: Extend the sashyReader
This series explores how to process unstructured data in parallel fashion — within a machine and across a series of machines — using the power of IBM DB2 for Linux, UNIX and Windows (LUW) and GPFS shared-nothing cluster (SNC) to provide efficient, scalable access to unstructured data through a standard SQL interface. In this article, see how the Java-based sashyReader framework leverages the architectural features in DB2 LUW. The sashyReader provides for parallel and scalable processing of unstructured data stored locally or on a cloud via an SQL interface. This is useful for data ingest, data cleansing, data aggregation, and other tasks requiring the scanning, processing, and aggregation of large unstructured data sets. You also learn how to extend the sashyReader framework to read arbitrary unstructured text data by using dynamically pluggable Python classes.
Articles 22 May 2014
InfoSphere Guardium Vulnerability Assessment Evaluation Edition
Download the trial for a period of 30 days.
Trial Downloads 19 May 2014
Measure the impact of DB2 with BLU Acceleration using IBM InfoSphere Optim Workload Replay
In this article, learn to use IBM InfoSphere Workload Replay to validate the performance improvement of an InfoSphere Optim Query Workload Tuner (OQWT)-driven implementation of DB2 with BLU Acceleration on your production databases. The validation is done by measuring the actual runtime change of production workloads that are replayed in an isolated pre-production environment.
Also available in: Chinese  
Articles 08 May 2014
Whitepaper: Protecting your critical data with integrated security intelligence
Learn how an integrated approach for extending security intelligence with data security insights can help organizations prevent attacks, ensure compliance, and reduce the overall costs of security management.
Articles 06 May 2014
DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 6: High availability
This tutorial highlights the data integrity skills you need in order to protect your database against unexpected failures. Learn how to configure and manage the high availability (HA) features of DB2 V10.1, which introduced the HADR multiple standby setup that provides a true HA and disaster recovery (DR) solution for your mission-critical databases. Examples illustrate how to configure this feature. You also learn about the DB2 pureScale technology that provides continuous HA to your critical business operations. This is part 6 of a series of eight DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 tutorials.
Articles 01 May 2014
Migrating 32-bit Informix ODBC applications to 64-bit
Informix 64-bit ODBC driver binaries have been available for many years, but the true 64-bit Informix ODBC driver was not introduced until Informix Client SDK v4.10 in early 2013. This article discusses the differences between the Informix 64-bit binaries of the Informix ODBC driver and the newer, true 64-bit driver. Also learn how to migrate your current 32-bit or 64-bit Informix ODBC applications to take advantage of the true 64-bit driver.
Also available in: Chinese  
Articles 01 May 2014
Archiving and recovery solutions for IBM Business Process Manager using InfoSphere Optim
Using simplified examples, this article shows how you can use IBM InfoSphere Optim to control data growth and, at the same time, maintain data privacy with IBM BPM.
Articles 30 Apr 2014
System Administration Certification exam 919 for Informix 11.70 prep, Part 1: Informix installation and configuration
In this tutorial, you'll learn about IBM Informix database server installation, configuration, and upgrade processes and strategies, and about configuring the different security options available in Informix. In addition, learn how to use different types of connections with the database server. This tutorial prepares you for Part 1 of the System Administration Certification exam 919 for Informix v11.70.
Articles 24 Apr 2014
InfoSphere Workload Replay: Transferring workloads between Workload Replay servers
In this e-learning course, you will learn how to move captured DB2 workloads from one InfoSphere Workload Replay server to another.
Demos 01 Apr 2014
XML or JSON: Guidelines for what to choose for DB2 for z/OS
IBM DB2 for z/OS offers document storage support for both JSON and XML. It is not always apparent whether JSON or XML is most suitable for a particular application. This article provides guidelines to help you select XML or JSON. It includes examples of creating, querying, updating, and managing in both JSON and XML in DB2 for z/OS.
Articles 27 Mar 2014
Use change data capture technology in InfoSphere Data Replication with InfoSphere BigInsights
Learn how to capture the changes made on source transactional databases such as IBM DB2 and Oracle, and replicate them to the Apache Hadoop Distributed File System in IBM InfoSphere BigInsights. Use change data capture replication technology in InfoSphere Data Replication 10.2 for InfoSphere DataStage with InfoSphere BigInsights support.
Articles 25 Mar 2014
InfoSphere® Workload Replay for DB2: Capturing, replaying, and analyzing workloads
In this e-learning course you will learn how to use the InfoSphere® Workload Replay web console to capture workloads, prepare them for replay, and then compare and analyze the execution results.
Demos 17 Mar 2014
Using the MDM Application Toolkit to build MDM-centric business processes, Part 3: Manage MDM hierarchies in BPM with the Application Toolkit
This is the third article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the IBM InfoSphere Master Data Management (MDM) application toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article focuses on extending the BPM Hello World scenario (introduced in Part 1) by displaying MDM hierarchical data in a BPM application. Learn how a REST server for the Application Toolkit is installed and configured to retrieve data from the MDM operational server. You'll use the IBM Process Designer, which is a component of BPM, to create and modify process applications.
Articles 15 Mar 2014
Speed up debugging of triggers, nested routines, and anonymous blocks with IBM Data Studio
IBM Data Studio is an Eclipse-based development tool for database developers and administrators. It offers a wide variety of features, including tools for debugging complex routines. Data Studio 4.1 has new features for DB2 for Linux, UNIX, and Windows (LUW) routine debugging. In this article, learn to use the new features to debug triggers, nested routines, and anonymous blocks quickly. Accelerate development of your routines with Data Studio 4.1.
Articles 13 Mar 2014
Parallel processing of unstructured data, Part 2: Use AWS S3 as an unstructured data repository
See how unstructured data can be processed in parallel fashion. Leverage the power of IBM DB2 for Linux, UNIX and Windows to provide efficient highly scalable access to unstructured data stored on the cloud.
Articles 13 Mar 2014
Ensuring transactional consistency with Netezza when using CDC and DataStage
This article shows you how to configure the Netezza Connector properly when transactions coming from InfoSphere Data Replication’s Change Data Capture (CDC) are first passed through DataStage before being written to PureData System for Analytics (Netezza). Walk through a use case where a flat file implementation provides a near real-time experience. The author highlights Netezza Connector implementations that do and do not work.
Articles 06 Mar 2014
Integrate InfoSphere Streams with InfoSphere Data Explorer
Learn how to integrate IBM InfoSphere Streams with InfoSphere Data Explorer to enable Streams operators to connect to Data Explorer to insert and update records. The article focuses on InfoSphere Streams 3.0 or higher and InfoSphere Data Explorer 8.2.2 or 8.2.3.
Articles 04 Mar 2014
IBM InfoSphere Data Architect V9.1.1
Download a free trial version of IBM InfoSphere Data Architect, a collaborative data design solution to discover, model, relate, and standardize diverse and distributed data assets. It enables users to create logical and physical data models, discover, explore, and visualize the structure of data sources, and discover or identify relationships between disparate data sources.
Also available in: Chinese   Portuguese  
Trial Downloads 28 Feb 2014
InfoSphere Guardium and the Amazon cloud, Part 1: Explore Amazon RDS database instances and vulnerabilities
The growing number of relational databases on the cloud accentuates the need for data protection and auditing. IBM InfoSphere Guardium offers real time database security and monitoring, fine-grained database auditing, automated compliance reporting, data-level access control, database vulnerability management, and auto-discovery of sensitive data in the cloud. With the Amazon Relational Database Service (RDS) you can create and use your own database instances in the cloud and build your own applications around them. This two-part series explores how to use Guardium to protect database information in the cloud. This article describes how to use Guardium's discovery and vulnerability assessment with Amazon RDS database instances. Part 2 will cover how Guardium uses Amazon S3 for backup and restore.
Also available in: Chinese   Portuguese   Spanish  
Articles 27 Feb 2014
Optimizing BDFS jobs using InfoSphere DataStage Balanced Optimization
This article explains how to use InfoSphere DataStage Balanced Optimization to rewrite Big Data File Stage (BDFS) jobs into Jaql. The BDFS stage, introduced in InfoSphere Information Server 9.1, operates on InfoSphere BigInsights, and InfoSphere DataStage Balanced Optimization is required to optimize the performance of BDFS jobs. It redesigns a job to maximize performance by minimizing the amount of input and output performed and by balancing the processing against source, intermediate, and target environments. Readers will learn how to use InfoSphere DataStage Balanced Optimization with the BDFS stage, along with the configuration parameters required for the Jaql Connector when a job is optimized.
Articles 20 Feb 2014
RUNSTATS: What's new in DB2 10 for Linux, UNIX, and Windows
DB2 10.1 provides significant usability, performance, serviceability, and database administration enhancements for database statistics. In this article, learn about the significant performance enhancements to the RUNSTATS facility. Examples show how to take advantage of new features such as new keywords, index sampling options, enhancements to automatic statistics collection, and functions to query asynchronous automatic runstats work.
Articles 13 Feb 2014
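For context, RUNSTATS can be issued from any client through the SYSPROC.ADMIN_CMD procedure. The sketch below (Python with the ibm_db driver; connection string, schema, and table name are placeholders) collects sampled statistics in the long-standing way; the DB2 10.1 additions such as index sampling follow the same pattern, and the article covers their exact keywords.

```python
# Minimal sketch: run RUNSTATS from a client via SYSPROC.ADMIN_CMD.
# Connection string and table name are placeholders.
import ibm_db

conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

runstats = ("RUNSTATS ON TABLE MYSCHEMA.SALES "
            "WITH DISTRIBUTION AND SAMPLED DETAILED INDEXES ALL "
            "TABLESAMPLE SYSTEM (10)")
ibm_db.exec_immediate(conn, f"CALL SYSPROC.ADMIN_CMD('{runstats}')")
ibm_db.close(conn)
```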
Using R with databases
R is not just the 18th letter of the English alphabet; it is also a very powerful open source programming language that excels at data analysis and graphics. This article explains how to use the power of R with data that's housed in relational database servers. Learn how to use R to access data stored in DB2 with BLU Acceleration and IBM BLU Acceleration for Cloud environments. Detailed examples show how R can help you explore data and perform data analysis tasks.
Also available in: Russian  
Articles 06 Feb 2014
IBM Business Analytics Proven Practices: IBM Cognos BI Dispatcher Routing Explained
A short document to explain the main concepts of IBM Cognos Dispatchers when routing requests.
Also available in: Chinese   Russian   Spanish  
Articles 31 Jan 2014
Use industry templates for advanced case management, Part 1: Introducing the Credit Card Dispute Management sample solution template for IBM Case Manager
IBM Case Manager provides the platform and tools for a business analyst to define and implement a new generation of case management solutions. To accelerate the development of solutions in particular industries, IBM Case Manager supports the notion of a solution template, which is a collection of case management assets that can be customized and extended to build a complete solution. To illustrate the value of solution templates and the features of IBM Case Manager, IBM has provided two sample solution templates that can be used as learning tools for users new to the platform. This tutorial introduces one of those templates: Credit Card Dispute Management from the financial services industry. This sample template can serve as a foundation for clients who want to build a similar solution. The template can also serve as a learning tool and reference for clients to build other solutions in other industries.
Also available in: Spanish  
Tutorial 31 Jan 2014
Use industry templates for advanced case management, Part 2: Introducing the Auto Claims Management sample solution template for IBM Case Manager
IBM Case Manager provides the platform and tools for business analysts to define and implement a new generation of case management solutions. To accelerate the development of solutions in particular industries, IBM Case Manager supports the notion of a solution template--a collection of case management assets that can be customized and extended to build a complete solution. To help illustrate the value of solution templates and the abilities of IBM Case Manager, IBM has provided two sample solution templates that can be used as learning tools for new users of the platform. This tutorial introduces one of those templates--Auto Claims Management--from the insurance services industry. Gain an understanding of what a template is, and learn about the assets delivered in this sample template and how they were built. (This tutorial includes the code for this sample template as well as instructions on how to deploy it.)
Also available in: Portuguese  
Tutorial 31 Jan 2014
DB2 and Optim Database Tools for IBM® DB2® Analytics Accelerator for z/OS®
This video gives an overview of IBM DB2 Analytics Accelerator for z/OS and the supporting DB2 and Optim Database tools.
Demos 31 Jan 2014
Parallel processing of unstructured data, Part 1: With DB2 LUW and GPFS SNC
Learn how unstructured data can be processed in parallel fashion -- within a machine and across a series of machines -- by leveraging DB2 for Linux, UNIX, and Windows and GPFS SNC to provide efficient, highly scalable access to unstructured data, all through a standard SQL interface. Realize this capability with clusters of commodity hardware, suitable for provisioning in the cloud or directly on bare metal. Scalability is achieved within the framework via the principle of computation locality: computation is performed local to the host that has direct access to the data, minimizing or eliminating network bandwidth requirements and eliminating the need for any shared compute resource.
Articles 30 Jan 2014
IBM Cognos Express
Download Cognos Express, the first integrated business intelligence (BI) and planning solution built to meet the needs of workgroups and midsize organizations. It delivers the reporting, analysis, dashboard, scorecard, planning, budgeting and forecasting capabilities that organizations require at a price they can afford. As part of the Cognos family of products, Cognos Express enables businesses to start small and expand their BI solution over time as their needs grow.
Also available in: Chinese   Japanese   Portuguese  
Trial Downloads 28 Jan 2014
DB2 monitoring: Tracing SQL statements by using an activity event monitor
This article describes a technique to easily trace (capture) the SQL statements that a client application executes. The technique uses monitoring features in IBM DB2 for Linux, UNIX, and Windows software, Version 9.7 Fix Pack 4 and higher.
Also available in: Russian  
Articles 23 Jan 2014
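A minimal sketch of the general approach, assuming the default event monitor target table names and the default user workload; the article covers scoping the capture to a particular application, the required authorities, and cleanup in more detail.

```python
# Sketch: capture SQL statement text with an activity event monitor.
# Connection details are placeholders; table names assume the defaults
# that DB2 generates for WRITE TO TABLE event monitors.
import ibm_db

conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

for sql in (
    "CREATE EVENT MONITOR ACTTRACE FOR ACTIVITIES WRITE TO TABLE",
    "SET EVENT MONITOR ACTTRACE STATE 1",
    "ALTER WORKLOAD SYSDEFAULTUSERWORKLOAD COLLECT ACTIVITY DATA WITH DETAILS",
):
    ibm_db.exec_immediate(conn, sql)

# ... run the client application whose SQL you want to trace ...

stmt = ibm_db.exec_immediate(
    conn, "SELECT STMT_TEXT FROM ACTIVITYSTMT_ACTTRACE")  # default table name
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["STMT_TEXT"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```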
Migrating from Sybase to DB2, Part 1: Project description
This article describes the processes and techniques used to migrate trigger code from Transact SQL (Sybase) into SQL PL (DB2). Part 1 describes the intended goal and scope of the project. Part 2 talks about the considerations and challenges we had to overcome to make the database vendor transparent to the application.
Articles 16 Jan 2014
DB2 10.1 fundamentals certification exam 610 prep, Part 4: Working with DB2 Data using SQL
This tutorial shows you how to use SQL statements such as SELECT, INSERT, UPDATE, DELETE, and MERGE to manage data in tables of the SAMPLE database. It also shows how to perform transactions by using the COMMIT, ROLLBACK, and SAVEPOINT statements, how to create stored procedures and user-defined functions, and how to use temporal tables. This is the fourth tutorial in the six-part DB2 10.1 fundamentals certification exam 610 prep series.
Articles 16 Jan 2014
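As a taste of the material, the sketch below (Python with the ibm_db driver; connection details are placeholders) runs a parameterized UPDATE and a SELECT against the EMPLOYEE table of the SAMPLE database inside an explicit unit of work.

```python
# Sketch: basic SQL against the DB2 SAMPLE database with explicit
# commit/rollback. Connection details are placeholders.
import ibm_db

conn = ibm_db.connect("DATABASE=SAMPLE;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")
ibm_db.autocommit(conn, ibm_db.SQL_AUTOCOMMIT_OFF)

try:
    # UPDATE with a parameter marker.
    stmt = ibm_db.prepare(
        conn, "UPDATE EMPLOYEE SET SALARY = SALARY * 1.05 WHERE WORKDEPT = ?")
    ibm_db.execute(stmt, ("D11",))

    # SELECT the result back.
    res = ibm_db.exec_immediate(
        conn, "SELECT EMPNO, LASTNAME, SALARY FROM EMPLOYEE WHERE WORKDEPT = 'D11'")
    row = ibm_db.fetch_assoc(res)
    while row:
        print(row["EMPNO"], row["LASTNAME"], row["SALARY"])
        row = ibm_db.fetch_assoc(res)

    ibm_db.commit(conn)      # make the unit of work permanent
except Exception:
    ibm_db.rollback(conn)    # undo the unit of work on any error
    raise
finally:
    ibm_db.close(conn)
```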
IBM Entrepreneur Week
IBM Entrepreneur Week is a one-of-a-kind opportunity for you to meet, interact, and connect with entrepreneurs, venture capitalists, industry leaders, and academics from around the world. If you're a startup or entrepreneur, join us online for our inaugural IBM Entrepreneur Week, 3-7 Feb 2014. There will be events taking place online and in locations worldwide, including face-to-face and virtual mentoring sessions, a women entrepreneur-focused event, and a LiveStream broadcast of the SmartCamp Global Finals in San Francisco.
Articles 15 Jan 2014
Applying IBM InfoSphere Information Analyzer rules for operational quality measurements
A key challenge in the day-to-day management of any solution is measuring whether the solution components are meeting IT and business expectations. Given these requirements, it becomes incumbent upon IT to build processes where performance against these objectives can be tracked. By employing such measurements, IT can then take action whenever thresholds for expected operational behavior are exceeded. Assessing and monitoring operational quality of information integration processes requires establishing rules that are meaningful in relation to the existing operational metadata. Rather than starting with a blank slate, this article demonstrates how to use pre-built rule definitions from IBM InfoSphere Information Analyzer to get under way in tracking the operational quality of IBM InfoSphere Information Server's data integration processing.
Articles 09 Jan 2014
What's new in InfoSphere Workload Replay for DB2 for z/OS v2.1
InfoSphere Optim Workload Replay for DB2 for z/OS (Workload Replay) extends traditional database test coverage. Now you can capture production workloads and replay them in your test environment without the need to set up a complex client and middleware infrastructure. In October 2013, version 2.1 of Workload Replay was released, with key enhancements that we describe in this article.
Articles 19 Dec 2013
Creating an Insert Service
This video shows how to use Optim Designer to create and use an insert service.
Demos 10 Dec 2013
Creating an Extract Service
This video shows how to use Optim Designer to create and use an extract service.
Demos 10 Dec 2013
Developing InfoSphere MDM Collaboration Server Java API based extension points
IBM InfoSphere Master Data Management (MDM) Collaborative Edition is master data management middleware that establishes a single, integrated, consistent view of master data inside and outside of an enterprise and supports building master data in collaborative ways. MDM Collaborative Edition can handle master data management for diverse domains such as retail, telecom, banking, and energy, and can address different use cases. To achieve this, it provides flexible data modeling, direct and workflow-based collaborative authoring of master data, import and export of master data in various formats, and various integration capabilities. MDM Collaborative Edition can deliver relevant and unique content to any person, system, partner, or customer, thereby accelerating time-to-market and reducing costs. The system can be customized through several interfaces, the major ones being MDM Collaborative Edition Scripting and the MDM Collaborative Edition Java APIs. This article provides the details on developing applications with the MDM Collaborative Edition Java APIs in the Eclipse Integrated Development Environment (IDE). It highlights the steps required to set up the environment, create a Java API-based project, run the classes, and debug using the Eclipse IDE. Sample code sections and screen shots illustrate the steps used in the development.
Articles 05 Dec 2013
DB2 10.1 for Linux, UNIX, and Windows DBA certification exam 611 prep, Part 1: Server management
This tutorial helps you learn the skills required to manage DB2 database servers, instances, and databases. Furthermore, you will be introduced to DB2 autonomic computing capabilities, and you will learn to use IBM Data Studio to perform database administration tasks such as job scheduling and generating diagrams of access plans. This tutorial prepares you for Part 1 of the DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611.
Articles 05 Dec 2013
Process big data with Big SQL in InfoSphere BigInsights
SQL is a practical querying language, but it has limitations. Big SQL enables you to run complex queries on non-tabular data and query it with an SQL-like language. The difference with Big SQL is that you are accessing data that may be non-tabular and may in fact not be based on a typical SQL database structure. Using Big SQL, you can import and process large-volume data sets, including the processed output of other jobs within InfoSphere BigInsights, turning that information into easily queryable data. In this article, we look at how you can replace your existing infrastructure and queries with Big SQL, and how to convert more complex queries to make use of your Big SQL environment.
Also available in: Russian  
Articles 03 Dec 2013
JDBC Provider and Data Source Configuration for DB2 v9 in WAS v7
This demonstration video will cover JDBC provider and Data Source configuration for DB2 v9 in WAS v7.
Demos 03 Dec 2013
Optim Service Interface usage guide
This article demonstrates how to create a custom user experience for managing Optim services by tapping into functionality provided by the Optim Service Interface (OSI). The OSI provides a headless public interface to a number of the same back-end web services used by the Optim Manager web application. It allows you to create your own front end that communicates with public RESTful resources through the marshaling of clearly defined XML payloads. A sample web application demonstrating this power and flexibility accompanies this article.
Articles 21 Nov 2013
Try: IBM Data Studio: Available at no charge
Download IBM Data Studio, which provides database developers and DBAs with the basic set of required tools for database development and administration for IBM Data Servers.
Also available in: Chinese   Russian   Portuguese   Spanish  
Trial Downloads 15 Nov 2013
InfoSphere Data Architect: Best practices for modeling and model management
This article explains best practices for creating models that are easier to maintain and for changing those models in a team environment. The article also details the settings you should configure so that the product works at an optimal level with the given resources.
Articles 14 Nov 2013
DB2 Advanced Copy Services: The scripted interface for DB2 Advanced Copy Services, Part 3
IBM DB2 Advanced Copy Services (DB2 ACS) supports taking snapshots for backup purposes in DB2 for Linux, UNIX and Windows databases. You can use the DB2 ACS API through libraries implemented by your storage hardware vendor (although until now only some vendors provide them), or you can implement the API yourself, which requires considerable effort. This changes with IBM DB2 10.5.
Articles 07 Nov 2013
Licensing distributed DB2 10.5 servers in a high availability (HA) environment
Are you trying to license your IBM DB2 Version 10.5 for Linux, UNIX, and Windows servers correctly in a high availability environment? Do you need help interpreting the announcement letters and licenses? This article explains it all in plain English for the DB2 10.5 release that became generally available on June 14, 2013.
Also available in: Chinese   Japanese  
Articles 04 Nov 2013
Compare the distributed DB2 10.5 database servers
In a side-by-side comparison table, the authors make it easy to understand the basic licensing rules, functions, and feature differences among the members of the distributed DB2 10.5 for Linux, UNIX, and Windows server family as of June 14, 2013.
Also available in: Chinese   Japanese  
Articles 04 Nov 2013
DB2 editions: Which distributed edition of DB2 10.5 is right for you?
Learn the details of what makes each edition of IBM DB2 10.5 for Linux, UNIX, and Windows unique. The authors lay out the specifications for each edition, licensing considerations, historical changes throughout the DB2 release cycle, and references to some interesting things that customers are doing with DB2. This popular article will be updated during the release with any intra-version licensing changes that are announced in future fix packs.
Also available in: Japanese  
Articles 03 Nov 2013
Tuning queries and workloads from InfoSphere Optim Performance Manager
Learn more about query tuning using the InfoSphere Optim Performance Manager (OPM) web console. Step-by-step descriptions show you how to tune queries and manage recommendations in the OPM web console, and how to use the OPM-supplied tuning configuration scripts to configure the monitored database for tuning. The article also includes tips on troubleshooting tuning-related problems and on database table space management.
Also available in: Russian  
Articles 31 Oct 2013
InfoSphere Guardium data security and protection for MongoDB, Part 1: Overview of the solution and data security recommendations
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium. Part 1 provides an overview of the solution, the architecture, and the benefits of using InfoSphere Guardium with MongoDB. The value of the fast-growing class of NoSQL databases such as MongoDB is the ability to handle high velocity and volumes of data while enabling greater agility with dynamic schemas. Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations. This article series describes configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process.
Also available in: Chinese   Portuguese  
Articles 30 Oct 2013
InfoSphere Guardium data security and protection for MongoDB Part 2: Configuration and policies
This article series describes how to monitor and protect MongoDB data using IBM InfoSphere Guardium, including the configuration of the solution, sample monitoring use cases, and additional capabilities such as quick search of audit data and building a compliance workflow using an audit process. Part 2 describes how to configure InfoSphere Guardium to collect MongoDB traffic and describes how to create security policy rules for a variety of typical data protection use cases, such as alerting on excessive failed logins, monitoring privileged users, and alerting on unauthorized access to sensitive data. Many organizations are just getting started with MongoDB, and now is the time to build security into the environment to save time, prevent breaches, and avoid compliance violations.
Also available in: Chinese   Portuguese  
Articles 30 Oct 2013
Using the MDM Application Toolkit to build MDM-centric business processes, Part 2: Performing CRUD operations against MDM using the Application Toolkit
This is the second in a series of articles that describe how to create process applications for master data by using IBM Business Process Manager (BPM). Specifically, this series refers to the InfoSphere Master Data Management (MDM) Application Toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This article focuses on extending the Hello World scenario to perform a full set of create, retrieve, update, and delete (CRUD) operations against an MDM operational server. The article shows you how to create human services quickly and simply within BPM to drive operations on the MDM server. I start by constructing a create process, and then proceed with update, delete, and retrieve. This article focuses on the speed and simplicity with which you can create CRUD business processes using the MDM Application Toolkit. While more advanced interactions are possible, they are deferred for later articles in the series.
Articles 24 Oct 2013
Optim Data Tools in PureData System for Transactions: Part 1
This is Part 1 of a three-part video demo series on Optim Data Tools in PureData System for Transactions.
Demos 24 Oct 2013
Optim Data Tools in PureData System for Transactions: Part 3
This is Part 3 of a three-part video demo series on Optim Data Tools in PureData System for Transactions.
Demos 24 Oct 2013
Optim Data Tools in PureData System for Transactions: Part 2
This is Part 2 of a three-part video demo series on Optim Data Tools in PureData System for Transactions.
Demos 24 Oct 2013
Using InfoSphere Streams with memcached and Redis
InfoSphere Streams is a powerful middleware product that provides a platform for development of streaming applications and their execution in a fast and distributed manner. In a stream processing system, there is often a need to externalize the application-related state information and share it with other distributed application components. It is possible to do distributed data sharing within the context of a Streams application. This is achieved through the use of the Distributed Process Store (dps) toolkit that provides a key-value store abstraction. The shared state can be stored in memcached or Redis -- two popular open-source distributed state management systems.
Articles 22 Oct 2013
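The article's examples use the SPL dps toolkit. Purely as a conceptual analogue, the Python sketch below (real redis-py client; host and key names are placeholders) shows the same idea of externalizing shared state in Redis so that separate distributed components can read and update it.

```python
# Conceptual analogue of externalized shared state: one component writes a
# value into Redis, another (possibly on a different host) reads it back.
import redis

store = redis.Redis(host="redis-host", port=6379, db=0)

# One distributed component publishes shared state ...
store.hset("app:counters", "tuples_seen", 42)

# ... and another component reads it back.
seen = int(store.hget("app:counters", "tuples_seen"))
print("tuples seen so far:", seen)
```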
DB2 Linux, Unix and Windows HADR Simulator use case and troubleshooting guide
Although DB2 high availability disaster recovery (HADR) is billed as a feature that's easy to set up, customers often have problems picking the right settings for their environment. This article walks through a use case that shows how you can use the HADR simulator tool to configure and troubleshoot your HADR setup in a real-world scenario. Using the examples and generalized guidance that this article provides, you should be able to test your own setups and pick the optimal settings.
Articles 17 Oct 2013
Guaranteed delivery with InfoSphere DataStage
This article describes how you can use the InfoSphere DataStage Distributed Transaction Stage to guarantee delivery of data. It also addresses the use of local transactions within DataStage database stages. Finally, it describes how the Change Data Capture Transaction stage works with InfoSphere Data Replication to guarantee the delivery of changes to a target database.
Also available in: Chinese  
Articles 17 Oct 2013
Develop custom KPIs using the Policy Monitoring JobFramework
This article discusses the basic structure of the JobFramework and its application to the definition of a custom KPI, using the Latency KPI as an example. The Latency KPI calculates the time that is required to propagate data changes from the data sources to the operational server, which is an important characteristic of the data consistency and trustworthiness. This article also describes how to navigate the new Latency KPI reports using IBM Cognos Business Intelligence Server.
Also available in: Chinese  
Articles 10 Oct 2013
DB2 Advanced Copy Services: The scripted interface for DB2 Advanced Copy Services, Part 2
DB2 Advanced Copy Services (DB2 ACS) supports taking snapshots for backup purposes in DB2 for Linux, UNIX and Windows databases. Customers can use the DB2 ACS API either through libraries implemented by their storage hardware vendors or by implementing the API on their own, which requires a high degree of effort. This changes with IBM DB2 10.5.
Articles 10 Oct 2013
A seamless upgrade to DB2 Text Search V10.5
IBM DB2 Text Search is integrated with DB2 V10.5 and has been available since V9.5. It is equipped with a highly sophisticated, feature-rich, full-text search server and provides powerful indexing and search capabilities. This article describes various methods of upgrading to DB2 Text Search V10.5. It uses an example to walk you through an upgrade scenario. In addition, it provides troubleshooting hints to resolve common upgrade problems.
Articles 03 Oct 2013
Using the MDM Application Toolkit to build MDM centric business processes, Part 1: Integrate BPM with MDM
This is the first article in a series that describes how to integrate IBM Business Process Manager (BPM) and master data. Specifically, this series refers to BPM 8.0.1 and the InfoSphere Master Data Management (MDM) Application Toolkit, both of which are provided with MDM 11.0. This article describes a Hello World scenario that shows you how to use the application toolkit to search for and retrieve data from MDM. This data is then displayed on a BPM Coach.
Articles 03 Oct 2013
InfoSphere Optim Performance Manager Security – Part 1: Web console access and Managing privileges
This video is the first of a two-part series that discusses the security feature in version 5.3 of the InfoSphere Optim Performance Manager (OPM).
Demos 02 Oct 2013
Compare the Informix Version 12 editions
Get an introduction to the various editions of IBM Informix, and compare features, benefits, and licensing considerations in a side-by-side table. Regardless of which edition you choose, Informix brings you legendary ease-of-use, reliability, stability, and access to extensibility features.
Also available in: Chinese   Russian   Portuguese  
Articles 01 Oct 2013
System log analysis using InfoSphere BigInsights and IBM Accelerator for Machine Data Analytics
When understood, logs are a goldmine for debugging, performance analysis, root-cause analysis, and system health assessment. In this real business case, see how InfoSphere BigInsights and the IBM Accelerator for Machine Data Analytics are used to analyze system logs to help determine root causes of performance issues, and to define an action plan to solve problems and keep the project on track.
Articles 01 Oct 2013
Informix Dynamic Server data compression and storage optimization
Starting with IBM Informix Dynamic Server (IDS) Version 11.50.xC4, you can compress data and optimize storage in IDS databases. The advantages of data compression and storage optimization include significant storage savings, reduced I/O activity, and faster backup and restore. IDS provides full online support for enabling storage optimization and compressing existing table data, while applications continue to use the table. This article provides an overview of IDS data compression and storage optimization functionality and shows you how to perform both tasks.
Also available in: Chinese  
Articles 26 Sep 2013
Why low cardinality indexes negatively impact performance
Low cardinality indexes can be bad for performance. But, why? There are many best practices like this that DBAs hear and follow but don't always understand the reason behind. This article will empower the DBA to understand the logic behind why low cardinality indexes can be bad for performance or cause erratic performance. The topics that are covered in this article include B-tree indexes, understanding index cardinality, hypothetical examples of the effects of low cardinality indexes, a real-world example of the effects of a low cardinality index, and tips on how to identify low cardinality indexes and reduce their impact on performance.
Articles 26 Sep 2013
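A back-of-the-envelope sketch of the core reasoning, with purely illustrative numbers: under a roughly uniform distribution, each key value of a low cardinality index qualifies a large fraction of the table, so an index probe can end up touching most of the data anyway.

```python
# Illustrative selectivity arithmetic for high- vs. low-cardinality indexes.
table_rows = 10_000_000

for column, cardinality in [("ACCOUNT_ID", 9_500_000),   # nearly unique
                            ("STATUS_FLAG", 3)]:          # low cardinality
    rows_per_key = table_rows / cardinality
    share = rows_per_key / table_rows      # fraction of the table per key value
    print(f"{column}: ~{rows_per_key:,.0f} rows per key value "
          f"({share:.2e} of the table per probe)")
```

The optimizer weighs this filter factor against the cost of a full scan, which is why low cardinality indexes often go unused or produce erratic plans.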
Working with Big SQL extended and complex data types
Big SQL, a SQL interface introduced in InfoSphere BigInsights, offers many useful extended data types. In general, a data type defines the set of properties for values being represented, and these properties dictate how the values are treated. Big SQL supports a rich set of data types, including extended data types that are not supported by Apache Hive. With data types supported by Big SQL, it's easier to represent and process semi-structured data. Using the code samples and queries included, learn how to use Big SQL complex data types in simple and nested form and how to create and implement these types in an application. As an added bonus, see how to use the Serializer Deserializer (SerDe) to work with JSON data.
Also available in: Chinese   Russian  
Articles 24 Sep 2013
Better Business Decisions at a Lower Cost with InfoSphere BigInsights
As the rate of data growth increases exponentially, current data management methods are rendered inadequate. InfoSphere BigInsights, based on the open source computing framework Apache Hadoop, is designed to manage big data sources and help organizations manage the data and analytics required.
Redbooks 24 Sep 2013
Harness the power of big data: The IBM big data platform
Big data represents a new era of computing -- an inflection point of opportunity where data in any format may be explored and utilized for breakthrough insights -- whether that data is in place, in motion, or at rest. Boost your big data IQ by reading this new book from the authors of Understanding Big Data.
Books 24 Sep 2013
Context-Based Analytics in a Big Data World: Better Decisions
This IBM Redbooks publication explores context, the cumulative history derived from data observations about entities (people, places, things). Context is a critical component of the analytic decision process. Without context, business conclusions might be flawed. By using context analytics with big data, organizations can derive trends, patterns, and relationships from unstructured data and related structured data. These insights can help an organization to make fact-based decisions to anticipate and shape business outcomes.
Redbooks 24 Sep 2013
Big Data Networked Storage Solution for Hadoop
In this IBM Redpaper, you'll find a reference architecture, based on Apache Hadoop, to help businesses gain control over their data, meet tight service-level agreements (SLAs) around data applications, and turn data-driven insight into effective action. With the Big Data Networked Storage Solution for Hadoop, get the ability to ingest, store, and manage large data sets with high reliability. The paper also touches on IBM InfoSphere BigInsights, which provides an innovative analytics platform that processes and analyzes all types of data to turn large complex data into insight.
Redbooks 24 Sep 2013
Unlock Big Value in Big Data with Analytics
In this IBM Redbooks publication, read how big data expands and evolves analytics that were not previously possible because of lack of available information, technology limitation, or prohibitive cost. Learn how big data can deliver more complete answers and new insights, improve processes and performance, and create new business models and differentiated services. Use this book to help you move beyond the hype to realize the value of big data.
Redbooks 24 Sep 2013
DB2 with BLU Acceleration: A rapid adoption guide
You have probably heard how DB2 with BLU Acceleration can provide performance improvements ranging from 10x to 25x and beyond for analytical queries with minimal tuning. You are probably eager to understand how your business can leverage this cool technology for your warehouse or data mart. The goal of this article is to provide you with a quick and easy way to get started with BLU. We present a few scenarios to illustrate the key setup requirements to start leveraging BLU technology for your workload.
Articles 19 Sep 2013
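For orientation, the sketch below (Python with the ibm_db driver; database and table names are placeholders) shows the most visible piece of the setup, creating a column-organized table on DB2 10.5. The guide also covers setting the DB2_WORKLOAD=ANALYTICS registry variable before database creation so that BLU-friendly defaults apply; that step is done with db2set outside of SQL.

```python
# Sketch: create a column-organized (BLU) table. Names are placeholders.
import ibm_db

conn = ibm_db.connect("DATABASE=BLUDB;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")
ibm_db.exec_immediate(conn, """
    CREATE TABLE MART.SALES_FACT (
        SALE_DATE DATE,
        STORE_ID  INTEGER,
        AMOUNT    DECIMAL(12,2)
    ) ORGANIZE BY COLUMN""")
ibm_db.close(conn)
```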
Get to know the R-project Toolkit in InfoSphere Streams
InfoSphere Streams addresses a crucial emerging need for platforms and architectures that can process vast amounts of generated streaming data in real time. The R language is popular and widely used among statisticians and data miners for developing statistical software for data manipulation, statistical computations, and graphical displays. Learn about the InfoSphere Streams R-project Toolkit that integrates with the powerful R suite of software facilities and packages.
Also available in: Chinese   Russian  
Articles 17 Sep 2013
Migrate terabytes of data from IBM Balanced Warehouse to IBM Smart Analytics System
In today's ever-demanding world, data warehouse environments continue to grow exponentially, both in terms of data and real-time data processing requirements. To meet these demanding needs, organizations have to make the right decisions to move applications to the right platform and, more importantly, at the right time. Reckitt Benckiser Group plc was an early adopter of the IBM Balanced Configuration Unit (BCU) Warehouse and recently upgraded to the next-generation IBM Smart Analytics System (ISAS) to provide financial customers with a better user experience while delivering higher data capacity.
Articles 12 Sep 2013
Smarter Analytics: Driving Customer Interactions with the IBM Next Best Action Solution (IBM Redbooks)
What if your organization could increase customer satisfaction with every customer interaction? What if your customer-facing teams had the information and insight necessary to delight your customers every time they made contact? What if you could proactively provide service to your customers before they even know that they need it?
Redbooks 10 Sep 2013
Getting started with real-time stream computing
Use InfoSphere Streams to turn volumes of data into information that helps predict trends, gain competitive advantage, gauge customer sentiment, monitor energy consumption, and more. InfoSphere Streams acts on data in motion for real-time analytics. Get familiar with the product and find out where to go for tips and tricks that speed implementation.
Also available in: Chinese   Russian  
Articles 10 Sep 2013
IBM Database Conversion Workbench, Part 1: Overview
The IBM Database Conversion Workbench (DCW) is a no-charge plug-in that adds database migration capabilities to IBM Data Studio. DCW integrates many of the tools used for database conversion into a single integrated environment, following an easy-to-use framework that is based on best practices from IBM migration consultants. This first article in the series provides an overview of conversion methodology and the various functions in DCW 2.0.
Articles 05 Sep 2013
Do I need to learn R?
R is a flexible programming language designed to facilitate exploratory data analysis, classical statistical tests, and high-level graphics. With its rich and ever-expanding library of packages, R is on the leading edge of development in statistics, data analytics, and data mining. R has proven itself a useful tool within the growing field of big data and has been integrated into several commercial packages, such as IBM SPSS and InfoSphere, as well as Mathematica. This article offers a statistician's perspective on the value of R.
Also available in: Chinese  
Articles 03 Sep 2013
Configuring DB2 Text Search in a partitioned environment
DB2 Text Search enables DB2 database applications to perform full text-search by using embedded full text-search clauses in SQL and XQuery statements. This allows you to create powerful text-retrieval programs. DB2 Text Search supports full-text search in both non-partitioned and partitioned database environments. Partitioned setups are often used for large workloads, and as the text search index is partitioned according to the partitioning of the table, careful planning of configuration and administration tasks is needed to account for search performance and high availability requirements. The article describes the concepts behind the text index partitioning scheme and the impact on administration, as well as the configuration for text search of a sample partitioned database setup. In addition, it discusses monitoring features and workload control options.
Articles 29 Aug 2013
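Once a text search index exists (creating it and the partitioning and workload considerations are what the article covers), queries use the CONTAINS function. A small sketch with the ibm_db driver; the table and column names are placeholders.

```python
# Sketch: query a DB2 Text Search index with CONTAINS. Names are placeholders.
import ibm_db

conn = ibm_db.connect("DATABASE=DOCSDB;HOSTNAME=dbhost;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")
stmt = ibm_db.exec_immediate(conn, """
    SELECT DOC_ID, TITLE
    FROM   MYSCHEMA.DOCUMENTS
    WHERE  CONTAINS(BODY, '"text search" AND partitioned') = 1""")
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["DOC_ID"], row["TITLE"])
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)
```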
ZooKeeper fundamentals, deployment, and applications
Apache ZooKeeper is a high-performance coordination server for distributed applications. It exposes common services -- such as naming and configuration management, synchronization, and group services -- in a simple interface, relieving the user from the need to program from scratch. It comes with off-the-shelf support for implementing consensus, group management, leader election, and presence protocols. In this article, we will explore the fundamentals of ZooKeeper, then walk through a guide to set up and deploy a ZooKeeper cluster in a simulated miniature distributed environment. We will conclude with examples of how ZooKeeper is used in popular projects.
Articles 27 Aug 2013
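A minimal sketch of the presence/group-membership pattern the article describes, using the third-party kazoo Python client (a real library); the ensemble addresses and znode paths are placeholders.

```python
# Sketch: register a worker's presence with an ephemeral, sequential znode.
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")
zk.start()

zk.ensure_path("/myapp/workers")
# An ephemeral node disappears automatically if this worker's session dies,
# which is what makes presence and leader-election recipes work.
zk.create("/myapp/workers/worker-", value=b"host-a",
          ephemeral=True, sequence=True)

print("live workers:", zk.get_children("/myapp/workers"))
zk.stop()
```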
Managing your InfoSphere Streams cluster with IBM Platform Computing
Today, the challenge for many organizations is extracting value from the imposing volumes of data available to them. Tackling the big data challenge can fundamentally improve how an organization does business and makes decisions. But managing your big data infrastructure doesn't have to be challenging. With the appropriate management strategy and tools, multiple large environments can be set up and managed efficiently and effectively. This article describes how to use IBM Platform Computing to set up and manage IBM InfoSphere Streams environments that will analyze big data in real time.
Articles 20 Aug 2013
Best practices for using InfoSphere Federation Server to integrate web service data sources
This article introduces the overall architecture of the Web services wrapper of IBM InfoSphere Federation Server. It explains, step by step, how to integrate data from web service providers by using web service nicknames. This article also covers some of the restrictions of the Web services wrapper.
Articles 15 Aug 2013
