DB2 for Linux, UNIX, and Windows Best Practices
Posted by fitzgarr
The table below, an excerpt from the best practices paper “Transforming IBM Industry Models into a production data warehouse,” describes guidelines for implementing an intelligent table space design strategy that gives you the flexibility to meet your service level objectives for all workloads: not just query, but also backup, archive, maintenance, recovery, and ETL.
Posted by sboivin
The best practice paper Managing data growth provides a wealth of recommendations to help you design and manage a database environment for efficient data growth, including tips on how to choose the right distribution key for a partitioned database:
Database partitioning helps you to adapt to data growth by providing a way to expand the capacity of the system and scale for performance. A distribution key is a column (or group of columns) that is used to determine the database partition in which a particular row of data is stored. The following guidelines will help you to choose a distribution key.
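To make this concrete, here is a minimal sketch of how a distribution key is declared when a table is created in a partitioned database; the table, column, and table space names are hypothetical:

    -- Pick a high-cardinality column that is frequently used in joins, so that
    -- rows spread evenly across database partitions and joins can be collocated.
    CREATE TABLE sales.transactions (
        txn_id       BIGINT        NOT NULL,
        customer_id  INTEGER       NOT NULL,
        txn_date     DATE          NOT NULL,
        amount       DECIMAL(12,2)
    )
    IN ts_sales
    DISTRIBUTE BY HASH (customer_id);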
If you have any comments or questions for the authors of this best practice paper, feel free to log a comment on the paper's summary page and we will respond. You need to log in with your IBM ID to be able to enter comments. Registering your ID is free and easy at developerWorks.
The new video from the DB2 team, "Getting up and running with HADR", provides a demonstration of how straightforward it is to set up HADR.
As we set up HADR in the video, we provide insight into some of the more important configuration decisions we are making, hopefully heading off some of the more common issues users face when setting up HADR.
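For reference, the kind of configuration the video walks through looks roughly like the sketch below; the host names, ports, instance name, and database name are placeholders:

    -- On the primary database (the standby uses the mirror-image settings)
    UPDATE DB CFG FOR sample USING
        HADR_LOCAL_HOST   hostA
        HADR_LOCAL_SVC    55001
        HADR_REMOTE_HOST  hostB
        HADR_REMOTE_SVC   55002
        HADR_REMOTE_INST  db2instB
        HADR_SYNCMODE     NEARSYNC;

    -- After initializing the standby from a backup of the primary:
    --   on the standby:  START HADR ON DB sample AS STANDBY
    --   on the primary:  START HADR ON DB sample AS PRIMARY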
Watch the full video on YouTube: http://youtu.be/P4JCd2a4uWk
Posted by Enda.McCallig
Table partitioning (also known as range partitioning) is a powerful feature of DB2 that supports good database design principles, helping to deliver easier maintenance operations, increased data availability, and better optimized queries.
But why is table partitioning so good in a warehousing environment? Here are some reasons:
1. Range-specific maintenance operations
Where data partitions (ranges of data within a table) are placed in individual table spaces, maintenance operations can be targeted at active data only.
Many DB2 commands, for example REORG and BACKUP, can be run against specific table spaces or data partitions. This can significantly reduce maintenance times.
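As a small, hedged sketch (the table and partition names are hypothetical), a maintenance operation can be targeted at a single range like this:

    -- Reorganize only the most recent data partition instead of the whole table
    REORG TABLE sales.fact_sales ON DATA PARTITION part_2024_q4;
    REORG INDEXES ALL FOR TABLE sales.fact_sales ON DATA PARTITION part_2024_q4;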
2. Data lifecycle management
Partitioning large fact tables by date means that older data can be detached from the table as an online operation. This can help eliminate the need to run costly DELETE statements.
In addition, as data ages, it can be moved to less costly storage as an online operation. In DB2 V10, this multi-temperature data management is facilitated by the new storage groups feature.
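For example (the partition, table, table space, and storage group names below are illustrative), rolling out an old range and moving aging data to cheaper storage might look like this:

    -- Detach the oldest range into a standalone table as an online operation
    ALTER TABLE sales.fact_sales
        DETACH PARTITION part_2019_q1 INTO sales.fact_sales_2019_q1;

    -- DB2 V10: move the table space that holds older ranges to a cold storage group
    ALTER TABLESPACE ts_sales_2019 USING STOGROUP sg_cold;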
3. Partition elimination
Range partitioning benefits queries that touch only one range, or a subset of the ranges, of the partitioning key.
The DB2 optimizer can then eliminate entire data partitions from the query, and this reduction in rows read (I/O) can help improve query performance.
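As an illustration, assume a fact table partitioned into quarterly ranges (the names and date ranges are hypothetical):

    CREATE TABLE sales.fact_sales (
        sale_date  DATE          NOT NULL,
        store_id   INTEGER       NOT NULL,
        amount     DECIMAL(12,2)
    )
    PARTITION BY RANGE (sale_date)
        (STARTING FROM ('2024-01-01') ENDING ('2024-12-31') EVERY 3 MONTHS);

    -- The predicate covers a single quarter, so the optimizer can eliminate
    -- the other data partitions from the scan.
    SELECT store_id, SUM(amount) AS total
    FROM sales.fact_sales
    WHERE sale_date BETWEEN '2024-10-01' AND '2024-12-31'
    GROUP BY store_id;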
4. Local indexes
Local indexes can help to significantly reduce index maintenance and increase query performance where significant sorting is not required.
In addition, local indexes can be placed in separate table spaces which provides more flexibility in building a backup schedule and a recovery strategy.
For example, in a restore scenario you have the choice between restoring and rebuilding indexes.
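A minimal sketch of a local index, using the hypothetical fact table from above:

    -- PARTITIONED creates one index partition per data partition; by default each
    -- index partition is stored with its data partition, and the INDEX IN clause
    -- of each PARTITION in the table definition can place it elsewhere.
    CREATE INDEX sales.ix_fact_store
        ON sales.fact_sales (store_id)
        PARTITIONED;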
5. Backup performance
Backup performance can be improved by backing up just those table spaces (table ranges) that are active.
Balancing the average size of your table spaces also allows parallelism within BACKUP operations to increase, helping to reduce the elapsed time of your backup operations.
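For example (the database, table space, and path names are placeholders), an online backup of just the active ranges might look like this:

    -- Online table space backup requires archive logging to be enabled
    BACKUP DATABASE sales_dw
        TABLESPACE (ts_sales_2024_q3, ts_sales_2024_q4)
        ONLINE TO /backup/sales_dw
        PARALLELISM 4;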
These and other warehouse design issues are discussed in our many papers on warehousing. If you have any comments or experiences you would like to share with the authors, please leave a comment.
Posted by sboivin
A new supplement to the popular DB2 best practices paper "Implementing DB2 Workload Management" has just been published. The supplement will help you set the DB2 client information fields for a variety of common middleware applications.
You can find it, along with other useful supplements, on the paper's information web page: https://ibm.biz/Bdx2n6
The DB2 client information fields are available on each connection to a database. These fields enable an external application that is using a connection to provide additional information to the DB2 database server that can be used to discriminate among connections based on end-user identification. The values in the client information fields are reported by DB2 for Linux®, UNIX®, and Windows® and other members of the DB2 family through various database monitoring and auditing interfaces. They are also leveraged by the DB2 workload definition in DB2 for Linux, UNIX, and Windows Version 9.5 and later as another way to aggregate connections to the database for purposes of monitoring and control.
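As a brief, hedged illustration (the identifiers and values are hypothetical, and the service class is assumed to already exist), an application can set the client information fields on its connection and a DB2 workload can then match on them:

    -- Set the client information fields on the current connection
    CALL SYSPROC.WLM_SET_CLIENT_INFO(
        'jsmith',          -- client user ID
        'WKSTN042',        -- client workstation name
        'NightlyReports',  -- client application name
        NULL,              -- client accounting string
        'AUTOMATIC');      -- let workload assignment happen automatically

    -- Route connections carrying this client application name to a dedicated workload
    CREATE WORKLOAD wl_reports
        CURRENT CLIENT_APPLNAME ('NightlyReports')
        SERVICE CLASS sc_reports;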
Share your impressions and questions about the paper and supplements by adding comments to the web page (you need to join developerWorks and log in first).
New best practices paper: "Building a data migration strategy with IBM InfoSphere Optim High Performance Unload"
Posted by sboivin
Announcing a new best practice paper: "Building a data migration strategy with IBM InfoSphere Optim High Performance Unload"
This paper addresses the topic of data migration and how you can use HPU to build a data migration strategy in which data is migrated automatically, on a schedule, from the source to the target database with no manual steps.
No longer do you have to grapple with reserving large amounts of storage capacity on the source or target database to stage data; no longer do you have to worry about preserving identity (surrogate) keys; no longer do you have to worry about generating subsets (ranges) of data to be migrated; and no longer do you have to worry about different DB2 software levels or distribution maps.
This newly published paper is the second on HPU; the first looked at using HPU as part of a recovery strategy. This one looks at how you can build and implement a data migration strategy using HPU. In testing the recommendations in this paper, we used both an IBM Smart Analytics System and an IBM PureData for Operational Analytics System.
Posted by sboivin
You are a busy professional and you don't always have the time and resources to travel to a technical conference, or call into a live web presentation, to listen to technical experts give great presentations about the products and technology you care about. Recorded webcasts offer you the benefits of listening to the same experts, at your own convenience, at the office or at home.
The tips and techniques presented in this webcast reflect information validated through the DB2 team's internal performance testing, as well as performance benchmark tests and customer engagements in real-life DB2 pureScale environments.
The popular DB2 best practices paper DB2 databases and the IBM General Parallel File System is now updated and includes DB2 V10.1 support.
The examples in this paper are based on DB2 V10.1 Fix Pack 2 and GPFS with efix13 installed on AIX 6.1 TL6 SP5, but they can be extended to more recent versions and other supported platforms. All versions of GPFS are supported with DB2 for Linux, UNIX, and Windows; however, the latest supported fix packs are recommended to ensure the best quality experience.
Technical paper summary:
In today’s highly competitive marketplace, it is important to deploy a data processing architecture that not only meets your immediate tactical needs, but that also provides the flexibility to grow and change to adapt to your future strategic requirements. To help reduce management costs, add flexibility, and simplify the storage management of your DB2® for Linux®, UNIX®, and Windows® installation, you need to choose a file system that is designed to provide a dynamic and scalable platform. The IBM® General Parallel File System™ (GPFS™) is a powerful platform on which to build this type of relational database architecture. This paper describes why GPFS is the right file system to use with DB2 databases by outlining the benefits and providing best practices for deploying GPFS with DB2 software. In addition, a section has been added to this paper to describe the DB2 pureScale feature, and how it configures and uses GPFS.
Posted by sboivin
We are pleased to announce the publication of a new DB2 for Linux, UNIX, and Windows best practices paper: DB2 V10 silent installation and uninstallation.
You can use DB2 silent installation and uninstallation to install or uninstall DB2 products and components without user interaction. Silent installation is useful for large-scale deployments of DB2 product editions. It is also useful when you need to embed the DB2 installation and uninstallation processes within the installation process of solutions that include DB2 products.
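As a minimal, hedged sketch (the product keyword, paths, and installation type vary by edition and platform), a silent installation is driven by a response file:

    * db2server.rsp: a minimal response file (values shown are illustrative)
    LIC_AGREEMENT  = ACCEPT
    PROD           = ENTERPRISE_SERVER_EDITION
    FILE           = /opt/ibm/db2/V10.1
    INSTALL_TYPE   = TYPICAL

You then invoke the installer with the response file, writing its log to a file of your choice:

    ./db2setup -r /path/to/db2server.rsp -l /tmp/db2setup.log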
This paper covers the tasks involved in preparing for and performing silent installations and uninstallations.
Share your impressions and questions about this paper by adding a comment on the paper's web page: https://ibm.biz/Bdx8Hr. You will need to log in to developerWorks with your IBM ID first.