The Warehouse team has been busy creating yet another best practice paper.
IBM InfoSphere Optim High Performance Unload (HPU) for DB2 for Linux, UNIX, and Windows V4.2 is a high-speed tool for unloading, extracting, and repartitioning data in DB2 for Linux, UNIX, and Windows databases. HPU unloads large volumes of data quickly by reading directly from full, incremental, and delta backup images or table space containers rather than through the DB2 software layer.
The database is not affected by the HPU process and table spaces do not need to be placed in an offline state. HPU can therefore offer advantages over other recovery methods in data availability and processing speed. Incorporating HPU into your recovery strategy for scenarios involving data loss or corruption can help achieve service level objectives for data availability and data recovery.
Incorporate InfoSphere Optim HPU into your recovery strategy to provide:
- Reduced recovery times through targeted data recovery
Use the WHERE clause when selecting data from backup images to target only the data that was lost, compromised, or corrupted, reducing recovery time.
- Increased data availability and reduced resource usage during recovery scenarios
Unload data from a backup image and optionally process this data on a non-production system, maintaining availability of the query workload and reducing the effect on the production system.
- Integration with DB2® and Tivoli® Storage Manager (TSM) software
Query TSM directly to unload data from full, incremental, and delta backup images archived to TSM by the DB2 BACKUP command, reducing the chance of error and facilitating recovery outside the production database.
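To give a flavor of the targeted-recovery technique the paper covers, here is an illustrative sketch of an HPU control file that unloads only the affected rows from a backup image using a WHERE clause. The database name, table, paths, and backup timestamp below are hypothetical, and exact control file syntax can vary by HPU version, so treat this as a sketch rather than a reference:

```
-- Illustrative only: names, paths, and timestamp are hypothetical
GLOBAL CONNECT TO PRODDB;
UNLOAD TABLESPACE
USING BACKUP DATABASE PRODDB FROM "/db2/backups" TAKEN AT 20111201120000;
SELECT * FROM SALES.ORDERS
  WHERE ORDER_DATE >= '2011-11-01';
OUTPUT("/tmp/orders.del");
FORMAT DEL;
```

A control file like this would typically be passed to the db2hpu command (for example, something along the lines of `db2hpu -f recover_orders.ctl`); consult the HPU documentation for the options supported in your release.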
This article discusses how to incorporate HPU into a recovery strategy and when to use HPU to meet your recovery objectives. In addition, the authors contrast the use of output files and named pipes, discuss introducing parallelism, review the HPU command and control file, and recommend best practices for creating control files. Finally, the article details how to install and configure HPU in an IBM Smart Analytics System.
This paper will be followed in the new year by a second paper on HPU that deals with data migration.
Thanks to the hard-working team: Garrett Fitzsimons and Konrad Emanowicz from the VLDB team, who created and tested all the scenarios discussed in the paper, and Richard Lubell from the DB2 LUW ID team, who steered the paper through to publication. Thanks also to Alice Ma, Austin Clifford, Bill Minor, Dale McInnis, Jaime Botella Ordinas, and Vincent Arrat, whose contributions, feedback, and experience were invaluable.
Thanks to Garrett and Serge for providing me with this information.
See also my previous posts on Best Practices: