Best practices: Using High Performance Unload (HPU) as part of a Recovery strategy
svisser1
The Warehouse team has been busy creating yet another best practice paper.
IBM InfoSphere Optim High Performance Unload (HPU) for DB2 for Linux, UNIX, and Windows V4.2 is a high-speed tool for unloading, extracting, and repartitioning data in DB2 for Linux, UNIX, and Windows databases. HPU unloads large volumes of data quickly by reading directly from full, incremental, and delta backup images or table space containers rather than through the DB2 software layer.
The database is not affected by the HPU process, and table spaces do not need to be placed in an offline state. HPU can therefore offer advantages over other recovery methods in both data availability and processing speed. Incorporating HPU into your recovery strategy for scenarios involving data loss or corruption can help you meet service level objectives for data availability and data recovery.
The paper discusses how to incorporate HPU into a recovery strategy and when to use HPU to meet your recovery objectives. In addition, the authors contrast the use of output files and named pipes, discuss introducing parallelism, review the HPU command and control file, and recommend best practices for creating control files. Finally, the paper details how to install and configure HPU on an IBM Smart Analytics System.
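To give a flavor of what the paper covers: an HPU unload is driven by a control file passed to the db2hpu command. The sketch below is illustrative only; the database, table, and file names are hypothetical, and the exact control file keywords can vary by HPU version, so consult the paper and product documentation for authoritative syntax.

```
-- Hypothetical control file (unload.ctl) for a simple unload to a delimited file
GLOBAL CONNECT TO BCUDB;
UNLOAD TABLESPACE
SELECT * FROM SALES.TRANSACTIONS;
OUTFILE("/work/unload/transactions.del" REPLACE)
FORMAT DEL;
```

A control file like this would typically be run with something like `db2hpu -f unload.ctl`. The output target could instead be a previously created named pipe (for example, via `mkfifo`), allowing a DB2 LOAD to consume the data as it is unloaded and avoiding intermediate disk space; the trade-offs between output files and named pipes are one of the topics the paper contrasts.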
This paper will be followed in the new year by a second HPU paper that deals with data migration.
Thanks to the hard-working team: Garrett Fitzsimons and Konrad Emanowicz from the VLDB team, who created and tested all the scenarios discussed in the paper, and Richard Lubell from the DB2 LUW ID team, who steered the paper through to publication. Thanks also to Alice Ma, Austin Clifford, Bill Minor, Dale McInnis, Jaime Botella Ordinas, and Vincent Arrat, whose contributions, feedback, and experience were invaluable.
Thanks to Garrett and Serge for providing me with this information.
See also my previous posts on Best Practices: