Many of my blog entries had their origins in a question posed to me. Earlier this week someone asked me for some free resources that can be used to stay on the cutting edge of all DB2 changes. I went through recent blog entries and without much effort, I was able to come up with 10 different types of free stuff. Check it out:
1. FREE - Replays of many of the "Chat with the Lab" webcasts are available on http://ChannelDB2.com
2. FREE - Replays of the top-rated shows from the past year on the DB2Night Show: http://bit.ly/i8qP49
3. FREE - White paper on HADR Read on Standby: http://ibm.co/eN9J4K
4. FREE - Podcasts by authors of books about IBM products: http://ibm.co/erB0OC
5. FREE - DB2 on Campus series of ebooks - 7 titles published, more coming. See all the details at http://bit.ly/gzOowb
6. FREE - 5 Flashbooks available... find out how to get yours now: http://bit.ly/fjR4x2
7. FREE - Absolutely fantastic technical presentations from the DB2Night Show's DB2's Got Talent competition: http://ibm.co/hcCH90
8. FREE - Read 6 e-chapters (BI, Data Modeling, MDM & more) in one free e-book: http://ibm.co/ejps1R
9. FREE - Learn from experts via Best Practice articles for DB2 for Linux, UNIX & Windows: http://ibm.co/fBiT89
10. FREE - Ten new IBM Redbooks that you may be interested in: http://ibm.co/hZDwXV
And now I get to add two more:
11. FREE - Learn from experts via Best Practice articles for DB2 for z/OS: http://ibm.co/eTXbRd
12. FREE – 3 certification exams for attendees of IDUG Tech Conference: http://ibm.co/giLoQA
That’s a lot of FREE! I hope you like it.
The Warehouse team has been busy creating yet another best practice paper.
Best practices: Using High Performance Unload (HPU) as part of a Recovery strategy
IBM InfoSphere Optim High Performance Unload (HPU) for DB2 for Linux, UNIX, and Windows V4.2 is a high-speed tool for unloading, extracting, and repartitioning data in DB2 for Linux, UNIX, and Windows databases. HPU unloads large volumes of data quickly by reading directly from full, incremental, and delta backup images or table space containers rather than through the DB2 software layer.
The database is not affected by the HPU process and table spaces do not need to be placed in an offline state. HPU can therefore offer advantages over other recovery methods in data availability and processing speed. Incorporating HPU into your recovery strategy for scenarios involving data loss or corruption can help achieve service level objectives for data availability and data recovery.
Incorporate InfoSphere Optim HPU into your recovery strategy to provide:
- Reduced recovery times through targeted data recovery
Use the WHERE clause when selecting data from backup images to target only the data that was lost, compromised, or corrupted, reducing recovery time.
- Increased data availability and reduced resource usage during recovery scenarios
Unload data from a backup image and optionally process this data on a non-production system, maintaining availability of the query workload and reducing the effect on the production system.
- Integration with DB2® and Tivoli® Storage Manager (TSM) software
Query TSM to unload data from full, incremental, and delta backup images archived to TSM by the DB2 BACKUP command, reducing errors and facilitating outside recovery.
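To illustrate the targeted-recovery idea, here is a rough sketch of an HPU control file that unloads only the affected rows from a backup image. All names, paths, and timestamps below are placeholders, and the exact control-file keywords should be verified against your HPU version's documentation rather than taken as confirmed syntax:

```sql
-- Illustrative only: unload lost rows for a date range directly
-- from a database backup image, bypassing the DB2 engine.
-- (Database, schema, path, and timestamp are hypothetical.)
GLOBAL CONNECT TO SALESDB;
UNLOAD TABLESPACE
  USING BACKUP DATABASE SALESDB FROM "/backups" TAKEN AT 20110115103000
  SELECT * FROM PROD.ORDERS
    WHERE ORDER_DATE BETWEEN '2011-01-10' AND '2011-01-14';
  OUTFILE ("/recovery/orders.del")
  FORMAT DEL;
```

The resulting delimited file could then be reloaded into the production table with the standard DB2 LOAD utility, or inspected first on a non-production system.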
This article discusses how to incorporate HPU into a recovery strategy and when to use HPU to meet your recovery objectives. In addition, the authors contrast the use of output files and named pipes, discuss introducing parallelism, review the HPU command and control file, and recommend best practices for creating control files. Finally, the article details how to install and configure HPU in an IBM Smart Analytics System.
This paper will be followed in the new year with a second paper on HPU that deals with data migration.
Thanks to the hard-working team: Garrett Fitzsimons and Konrad Emanowicz from the VLDB team, who created and tested all the scenarios discussed in the paper, and Richard Lubell from the DB2 LUW ID team, who steered the paper through to publication. Thanks also to Alice Ma, Austin Clifford, Bill Minor, Dale McInnis, Jaime Botella Ordinas, and Vincent Arrat, whose contributions, feedback, and experience were invaluable.
Thanks to Garrett and Serge for providing me with this information.
See also my previous posts on Best Practices:
More on DB2 for z/OS Best Practices
Best Practices: Active Data Warehouse Topology using InfoSphere Warehouse
New DB2 WLM Best Practice Paper
Best Practices: IBM Smart Analytics Systems
How do I handle multi-temperature data in my warehouse?
Updated Best Practices Paper: Database Storage
Best Practice Resources for DB2 z/OS
DB2 Best Practices – New or Updated in 2011
I can’t believe that it’s been so long since I blogged about the Best Practice papers! Kate Kurtz sent me four new ones today, and I found that my last entry about this topic was posted in April 2009: “Learn & Benefit from Others”. Here are the four newest ones:
Written by Matthias Nicola and Susanne Englert
Update Published January 2011
This paper provides principles and guidelines for using DB2® pureXML® to solve business problems effectively and to achieve high performance when managing XML data in enterprise applications. The examples illustrating the best practices are based on a real-world financial application scenario and demonstrate how to implement the guidelines. The examples can be easily adapted to other types of XML applications. The paper covers the following areas:
- Storage options for XML data to improve performance and storage efficiency
- Techniques for adding XML data into a DB2 database
- Techniques for querying and updating XML documents efficiently
- Techniques for using indexes over XML data with queries effectively
- Techniques for efficiently maintaining and monitoring an XML database
- Techniques for developing efficient pureXML applications
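To give a flavor of the techniques the paper covers, here is a minimal pureXML sketch. The table, index, and data values are invented for illustration and are not taken from the paper's financial scenario:

```sql
-- Hypothetical table with an XML column
CREATE TABLE client (
    id          INT NOT NULL PRIMARY KEY,
    contactinfo XML
);

-- Index over a specific XML element so that XML predicates can be
-- evaluated via the index instead of scanning whole documents
CREATE INDEX idx_client_city ON client (contactinfo)
    GENERATE KEY USING XMLPATTERN '/client/address/city'
    AS SQL VARCHAR(40);

-- An XMLEXISTS predicate that the index above can satisfy
SELECT id
FROM client
WHERE XMLEXISTS ('$c/client/address[city = "Toronto"]'
                 PASSING contactinfo AS "c");
```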
Written by Maksym Petrenko, Mike Winer, and Joyce Coleman.
Published February 2011
Summary: Data in a data warehouse can be classified according to its temperature. The temperature of data is based on how often it is accessed, how volatile it is, and how important the performance of the queries that access the data is. Hot data is frequently accessed and updated, and users expect optimal performance when accessing this data. Cold data is rarely accessed and updated, and the performance of the queries that access this data is not essential. Using faster, more expensive storage devices for hot data and slower, less expensive storage devices for cold data optimizes the performance of the queries that matter most while helping to reduce overall cost.
Learn about a strategy for managing a multi-temperature data warehouse by storing data on different types of storage devices based on the temperature of the data.
This article provides guidelines and recommendations for each of the following tasks:
- Identifying and characterizing data into temperature tiers
- Designing the database in an IBM® Smart Analytics System environment to accommodate multiple data temperatures
- Moving data from one temperature tier to another
- Using DB2® workload manager (WLM) to allocate more resources to requests for hot data than to requests for cold data
- Planning a backup and recovery strategy when a data warehouse includes multiple data temperature tiers
The content of this article applies to data warehouses based on version 9.7 or later of DB2 for Linux®, UNIX®, and Windows®. All examples in the article refer to IBM Smart Analytics System and InfoSphere™ Balanced Warehouse® environments.
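One common way to realize the tiering strategy described above in DB2 9.7 is table range partitioning, with each partition placed in a table space backed by the appropriate class of storage. The sketch below is an assumption-laden illustration (table, column, and table space names are all made up), not the article's own design:

```sql
-- ts_hot is assumed to live on fast storage, ts_cold on cheaper disks.
CREATE TABLE sales (
    sale_date DATE NOT NULL,
    store_id  INT  NOT NULL,
    amount    DECIMAL(12, 2)
)
PARTITION BY RANGE (sale_date) (
    PARTITION p_2010 STARTING ('2010-01-01') ENDING ('2010-12-31') IN ts_cold,
    PARTITION p_2011 STARTING ('2011-01-01') ENDING ('2011-12-31') IN ts_hot
);

-- As data cools, a partition can be detached and archived or
-- re-created on colder storage with minimal disruption to queries.
ALTER TABLE sales DETACH PARTITION p_2010 INTO sales_2010_archive;
```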
Written by Walid Rjaibi and Mark Wilding
Published February 2011
Summary: Public cloud computing is an emerging computing technology that uses the Internet and central remote servers to host data and applications. It allows consumers and businesses to use applications without installing them locally and to access information from any computer with Internet access. Cloud computing allows for much more efficient computing through centralized storage, memory, and processing. The benefits of cloud computing are clear, and so is the need to develop appropriate security for cloud implementations.
This article is important for all DBAs who are setting up or managing databases in a public cloud environment. The details and best practices in this article will help DBAs protect themselves and their companies from security leaks and exposures by applying a standard, high-grade security policy to all databases that are hosted in a public cloud.
The information in this article is organized into three main sections:
- The IBM data server security blueprint: The blueprint first positions data server security within the bigger enterprise security picture. This section also describes steps to develop and roll out a security plan.
- Threats and countermeasures: This section describes the most common threats that affect a data server, whether it is deployed in a traditional environment or a cloud computing environment. The section then recommends a set of countermeasures.
- Additional cloud-specific security challenges: This section examines the additional challenges posed to data security in a cloud environment, in particular the need for privileged user monitoring and data segregation.
Written by: Silvio Luiz Correia Ferrari, Marco Antonio Norbiato, and Joyce Coleman.
Published March 2011
Summary: The IBM Smart Analytics System family of offerings evolved from the InfoSphere Balanced Warehouse family of offerings. Both are based on the same storage and database design principles.
Learn about some of the frequently asked questions about system maintenance in IBM® Smart Analytics System environments and InfoSphere™ Balanced Warehouse environments.
The frequently asked questions are grouped into the following two categories:
- System administration
- Ongoing maintenance and upgrades
The questions and answers discussed here apply to several generations of IBM Smart Analytics System configurations. The article uses the term IBM Smart Analytics System except when referring to specific InfoSphere Balanced Warehouse configurations; most content applies to both. Unless otherwise indicated, all content applies to the V9.5 and V9.7 levels of the InfoSphere Warehouse and DB2® software.
You can find all the best practice articles on the following website:
I hope you find this series of articles useful. I’ll keep the website bookmarked for updates and will blog as new articles are published.
I noticed that Conor O’Mohony just posted a similar entry so it gave me an idea to do the same. Thanks for the inspiration, Conor!
Last year on Dec 20, my blog had had 994,747 unique views since I started blogging on devworks. Today, I have 1,994,943. The difference is just over a million views! I will definitely reach 2 million views this week! Very exciting.
So, what posts did people like to read on my blog this year? Here are the entries based on number of views:
1. Flashbook: Understanding Big Data: Analytics for Enterprise Class Hadoop and Streaming Data
2. FREE stuff
3. DB2 Best Practices – New or Updated in 2011
4. Book Promotions for the Month of February 2011
5. Where can I get a Flashbook?
I’m happy that these 5 entries were well-read since I think that they contain valuable information and reflect the type of content that I like to post.
To complete the list, here are the all-time most read entries:
1. New White Paper: IBM DB2 and Canonical Ubuntu
2. Making it Big in Software
3. Certification: Getting Started Checklist
4. Certification 101
5. DB2 on Campus Book Series
I like these posts as well, but the certification posts are sadly out of date. I will update them in the next while so you have better content to read.
If you like what I tend to share, I encourage you to “follow” my blog. As of writing this entry, I have 192 followers, and I thank every single one of you. I’m in the midst of reading Antonio Cangiano’s book, Technical Blogging: Turn Your Expertise into a Remarkable Online Presence, and am learning from his expertise. I would like to put many of his tips into practice in the next few months… I wonder if you’ll notice? Asking you to follow my blog is just one of his tips!
Congratulations to the Best Practices team for putting out another excellent paper. This one is for Workload Manager: DB2 best practices: Implementing DB2 workload management in a data warehouse
What You’ll Learn:
- The best ways to implement a successful workload management solution using DB2 for Linux, UNIX, and Windows, Version 9.7.4 or higher.
- The information in the article reflects the latest experiences of IBM field personnel and customers within the data warehouse arena.
Using a staged approach, this article guides you through the steps needed to implement the best practices workload management configuration on IBM DB2 for Linux, UNIX, and Windows with sufficient controls to help ensure a stable, predictable system for most data warehouse environments. This initial configuration is intended to be a good base for implementing additional tuning and configuration changes as needed, in order for you to achieve your specific workload management objectives.
You’ll be presented with a set of definitions representing the different stages of maturity for a workload management configuration in a DB2 for Linux, UNIX, and Windows database. These stages range from stage 0 through to the advanced stage 3 configuration. A specific configuration template and process is provided as part of these best practices to enable customers to progress from a stage 0 configuration to a stage 2 configuration. General descriptions and advice are also given about common stage 3 scenarios.
The article assumes a novice reader and describes the individual steps and mechanisms at each point. A more experienced user can condense many of the listed steps when moving from stage 1 to stage 2, making the transition in days of elapsed time rather than the weeks indicated by the suggested timeline in a later section.
The steps outlined are focused on the efficiency of the system as a whole, regardless of where the work itself comes from. It is important to note that achieving the goal of a stable system might not necessarily also result in the achievement of any individual application service-level agreement (SLA) or specific performance objectives for queries. These more granular objectives might require subsequent changes to the workload management configuration, such as outlined in the section on stage 3 scenarios, which is outside the main scope of this document.
This is not a tutorial on DB2 workload management capabilities and does not attempt to provide comprehensive guidance in addressing all possible scenarios where DB2 workload management might be employed. It also does not cover all features within the DB2 product that might be of use in controlling resource consumption. The scope of this article is focused on describing the system stabilization approach in some detail and provides some general guidance for common advanced scenarios.
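For a taste of what a DB2 workload management configuration looks like, here is a small DDL sketch. These are not the paper's stage templates; the service class names, application name, and concurrency limit are all invented for illustration:

```sql
-- Hypothetical service class hierarchy: one superclass for all
-- warehouse work, with subclasses for short and long queries.
CREATE SERVICE CLASS warehouse_work;
CREATE SERVICE CLASS short_queries UNDER warehouse_work;
CREATE SERVICE CLASS long_queries UNDER warehouse_work;

-- Map connections from an assumed reporting application into
-- the superclass.
CREATE WORKLOAD reports_wl
    APPLNAME ('reports.exe')
    SERVICE CLASS warehouse_work;

-- Limit how many long-running queries execute concurrently,
-- queueing the rest rather than failing them.
CREATE THRESHOLD cap_long_queries
    FOR SERVICE CLASS long_queries UNDER warehouse_work
    ACTIVITIES ENFORCEMENT DATABASE
    WHEN CONCURRENTDBCOORDACTIVITIES > 5
         AND QUEUEDACTIVITIES UNBOUNDED
    CONTINUE;
```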
Get the article now.
Paul Bird is a senior technical staff member (STSM) within the IBM Software Group development organization, sharing his time between the Optim and DB2 development organizations. Since 1991, he has worked on the inside of the DB2 for Linux, UNIX, and Windows product as a lead developer and architect with a focus on diverse areas such as workload management, monitoring, security, and general SQL processing. He recently became a member of the Optim development organization to expand his experiences. You can reach him at firstname.lastname@example.org.
Rimas Kalesnykas is a technical writer for DB2 for Linux, UNIX, and Windows. In the last 5 years, he was the documentation owner for a variety of DB2 subject areas, including the Command and API references, the Partitioning and Clustering Guide, Troubleshooting, and Workload Management. In 2008, he was a co-author of a best practices paper that describes how to improve data server utilization and management through virtualization. You can reach him at email@example.com.
Did you know?
- Best practices papers complement product documentation and provide practical recommendations to help you implement IBM Information Management solutions.
- Best practices are developed and tested by a wide ranging group of experts, including product architects, software developers and testers, information developers, field practitioners, and most importantly, real users.
- Best Practices webcasts allow users to watch presentations from technical experts without having to attend a live conference presentation.
- As new products and features are introduced, new best practices are published.
- As more practical knowledge is gathered from real-world environments, best practices are updated to reflect up-to-date information.
See also my previous posts on Best Practices:
Best Practices: IBM Smart Analytics Systems
How do I handle multi-temperature data in my warehouse?
Updated Best Practices Paper: Database Storage
Best Practice Resources for DB2 z/OS
DB2 Best Practices – New or Updated in 2011
Did you know that data has temperatures? Can data run a fever? What happens if your data catches a cold? Or rather, turns cold from hot? What can and should you do? What techniques are available for optimizing performance in databases where some data is more popular (hot) than other data?
On Friday’s episode of the DB2Night Show, we'll be joined by various IBM experts who will help us answer these challenging questions! Kate Kurtz along with Maksym Petrenko and Jim Seeger from the IBM Smart Analytics Systems Best Practices team will share with us their expertise!
Topic: How do I handle multi-temperature data in my warehouse?
Speakers: Kate Kurtz, Maksym Petrenko, Jim Seeger
Host: Scott Hayes
Date: Friday, June 17
Time: 11:00 ET
Best Practices are essential to understanding how you should approach different situations in your IT environments. The DB2 LUW and IBM Smart Analytics System Best Practice teams are focused on understanding what situations our customers need help with, and then providing clear recommendations on how to implement the solutions.
One of the questions that comes up frequently is "How do I handle multi-temperature data in my warehouse?" This Friday, we will be discussing an overview of multi-temperature data management, and some of the strategies you can use to manage that data in your environment. There will also be a question and answer period - so take a look at the paper and store up your questions!
More about the speakers:
Maksym Petrenko joined the IBM Toronto Lab in 2001 and has worked exclusively with DB2 software since then. Over his career he has been a developer, technical support analyst, lab services consultant, and beta enabler. His experience includes supporting clients with installation, configuration, application development, and performance issues related to DB2 databases on Windows, Linux, and UNIX platforms. Maksym is a certified DB2 Advanced Database Administrator and DB2 Application Developer.
Jim Seeger is a Senior Software Engineer in the DB2 development group. He has worked on the DB2 kernel for almost 3 years and on the development of relational databases for 21 years. Over the past 9 years at IBM, he has worked on storage area network file systems, storage virtualization products, high availability offerings, and open standards storage management interfaces. Prior to joining IBM, Jim worked at NCR, Informix, and EMC.
Katherine Kurtz has been a member of the DB2 development team since 1997 and is currently an Information Development Manager for DB2 LUW and the IBM Smart Analytics System for the DB2 LUW Development team at the IBM Canada Lab in Markham, Ontario. She holds a Bachelor of Mathematics (Computer Science and Pure Math) from the University of Waterloo and a Bachelor of Education from the University of Western Ontario. Katherine’s work is focused on understanding how customers use our products, and how IBM can make our products more consumable through the sharing of best practices that incorporate insight from IBM technical leaders in the Lab and in the field. Katherine also has deep experience in DB2 System Verification and Test, SAP/DB2 Integration, customer support, performance development, and release management.
The DB2Night Show now has a LinkedIn Group - join us to stay up to date on our show and discuss show episodes! The group registration link can be found on www.DB2NightShow.com. We hope you will join us and participate as a studio audience member. One studio audience member will be randomly selected to win an electronic gift certificate! If you can't make the scheduled time, register anyway so that you get information about recorded replays.
Thanks for the information Kate!
Sadly I’m not at IDUG this year. I’ll definitely miss many very special people, the education, the networking, and the dine around event. For those of you who are there, let us know how it is going, and get full use out of your visit!
My good friend Kate Kurtz is attending IDUG this year. It is her first time and she is presenting a session. I’ve asked her to guest blog for me while the conference is going on. This is the first of her entries. I hope you enjoy:
My name is Katherine Kurtz, and it is my first time going to IDUG!
Susan invited me to write some guest blogs, to give the inside scoop on the various sessions at IDUG and what the experience is like. I work with a great team of people developing best practices for the IBM Smart Analytics System and InfoSphere Warehouse Environments, and I am currently a manager for DB2 and IBM Smart Analytics System Information Development.
I really believe that we should educate people - ourselves and our customers - about the best way to use our products, which is how I ended up submitting several sessions to IDUG. (Note to self: Be careful what you ask for! Preparation for IDUG was about 5x more work than I expected). Best Practices are all about taking what is tried and tested in the field, putting it together with the future directions our development team is working on and then working together to make a great product even better.
I love what I do - and it is great to always be learning.
It is the night before I leave and I am currently building my checklist.
- presentations ready? check (well, maybe not quite; I think I still want to tweak a few things)
- packed? check
- American $$, just in case I get to go shopping... :-) check
What will I be doing while I am at IDUG?
I am co-presenting two sessions: Warehouse Best Practices in Action - The IBM Smart Analytics System and Best Practices for Building a Recovery Strategy for your Data Warehouse. I was also asked to participate in a Special Interest Group panel discussion focusing on Data Warehousing. I am sure there will be lots of interesting debate - and I hope I can think on my feet! There is also the first-timer luncheon on Wednesday - where I get to go as an IDUG first-timer and as an IBMer, plus I get food, which is always a good thing.
One of the things I liked the most about IOD last fall (which was also the first time I attended IOD) was the conversations I had with customers I met during different sessions. It was always interesting to hear their point of view, and I think it really changes the way I think about the products that we are developing. I hope IDUG will be the same way - lots of great discussions, and a chance to learn a lot from others.
If you are attending IDUG, please be sure to stop by and say hi!
Thanks Kate! Looking forward to hearing more from you.
Others who are at IDUG and are blogging include:
Rebecca Bond: What do your car keys, Nemo and IDUG NA have in common?
There already seems to be a lot of traction for the hash tag #IDUG, so I put together a web page that searches for Twitter updates containing #IDUG. It’s pretty cool: http://bit.ly/kW3MoM or http://www.dbisoftware.com/idugna11.php
It tracks roughly the 50 most recent tweets and refreshes continuously every 8 seconds.
Cristian Molaro: DBBuzz
See my previous entries about the conference. If you are there and would like me to blog about your experiences, let me know :)
Just a bit more about IDUG
Guest blogger: Beth – What IDUG Sessions are recommended in the DB2 for z/OS track?
All about IDUG DB2 Tech Conference - Anaheim
Rebecca Bond: IDUG NA – The Geeky Vacation!
Budgeting for IDUG North America conference as viewed by the DB2 Locksmith
PS… If you see Roger Miller, please give him a big kiss goodbye as he is retiring very soon …
Last week I blogged about the Best Practice papers that are available for DB2 for Linux, UNIX, and Windows: DB2 Best Practices – New or Updated for 2011. As a result of that entry, I found out that there are also Best Practices resources for DB2 for z/OS - not only articles, but also lectures, podcasts, and publications. Thanks to Janet Ikemiya and David Salinero for providing me with this information!
The DB2 for z/OS best practices developerWorks webpage contains many of the recommendations and guidance of DB2 for z/OS subject matter experts. Material from frequent conference speakers such as John Campbell, Roger Miller, Bonnie Baker, Willie Favero and several others is located on this site. Over 25 web lectures (slides with audio), articles, and publications are organized into categories such as administering, security, and tuning.
The most recent additions are:
John Campbell's two-part series of recommendations on optimizing DB2 insert performance. Each web lecture includes a video with slides and audio by John. The slides and transcript are also available separately:
Optimizing insert performance, Part 1 (February 2011)
by John Campbell
This webcast presents trade-offs made between optimizing DB2 insert for better throughput versus space reuse. It explores questions regarding inserting rows sequentially or randomly, whether to sort into clustering order and the effects of indexes on insert performance. Learn best practices for the different types of DB2 table spaces and get the details on DB2 parameters and recommended values.
Optimizing insert performance, Part 2 (March 2011)
by John Campbell
This second webcast on optimizing insert performance reviews additional techniques for speeding up the process of inserting rows into DB2. Learn about methods such as using a large index pagesize, random index keys, removing unused indexes, logging, and efficiently creating unique identifiers. The webcast covers enhancements specific to DB2 10 (such as index I/O parallelism, unique index with INCLUDE, and in-line LOBs) that impact insert performance. Finally, the presenter summarizes the best practices from both part 1 and part 2.
Also added in 2011:
DB2 Statistics Data Collection, DB2 System Address Space CPU Time and WLM Settings, Data Set Open/Close Activity (January 2011)
by Florence Dubois
There are many choices when collecting statistics about the DB2 system environment. This webcast will give guidelines on which statistics to gather and how to analyze your system's health. The data collected from IFCIDs, CPU time from monitoring tools, WLM metrics, and data set open and close activity will all be explored. The recommendations given will lead to improved DB2 performance.
Hard Lessons Learned From Customer Health Check Studies (January 2011)
by John Campbell
This webcast presents practical advice from DB2 health checks at customer installations. It describes problems encountered when dealing with WLM, disaster recovery, continuous availability, data sharing, DB2 restart, and storage tuning. It discusses the most common issues and proven methods to prevent problems.
Additional material is posted throughout the year, so bookmark this page!
If you have a suggestion for additional best practice content, please send an email to the contact firstname.lastname@example.org.
Last week I blogged about the importance and benefit of word-of-mouth feedback via online reviews. Today I want to remind you of the series of Best Practices documents that are available to you.
Each Best Practice paper is designed to provide you with practical guidance for the most common DB2 product configurations. These papers were written by leading experts in IBM's development and consulting teams, and have been extensively tested.
Why learn via trial & error when you can be effective immediately by following the advice in these papers?
Here are some of the topics that you can find:
Don't forget to share your feedback on anything IM related. I'd suggest that you put feedback on the ChannelDB2.com site, at the very least. You can also share your own insights and comments with other users of IBM DB2 for Linux, UNIX, and Windows via the IBM Database Wiki.
Discover the best practices for IBM DB2 for Linux, UNIX, and Windows. Get practical guidance for the most common DB2 9 product configurations and use this knowledge to improve the value of your DB2 data servers.
Notice that updates and additions were made recently:

- "Database Storage" (October 2008) Updated! Discover best practices for database storage, including guidelines and recommendations for spindles and logical unit numbers (LUNs), stripes and striping, transaction logs and data, file systems versus raw devices, Redundant Array of Independent Disks (RAID) devices, registry variable and configuration parameter settings, and automatic storage.
- "Physical Database Design" (May 2008) Learn about the design features that relate to the physical structure of the database, such as data type selection, table normalization and denormalization, indexes, materialized views, data clustering, multidimensional data clustering, table (range) partitioning, and database (hash) partitioning.
- "Information Modeling with Rational Data Architect Version 7" (October 2008) New! Discover the best practices for information modeling, based on the experiences of a team of data modelers using Rational Data Architect with DB2. Rational Data Architect is an information modeling tool that can help you establish and use design standards based on proven design methodologies to create highly reusable models that satisfy business requirements.
- "Data Life Cycle Management" (October 2008) Updated! See the best practices for designing and implementing scalable solutions that facilitate the continuous feed or roll-in and roll-out of data, with minimal interruption of data access.
- "DB2 High Availability Disaster Recovery" (October 2008) New! Get recommendations for setting up and maintaining a DB2 High Availability Disaster Recovery (HADR) environment in ways that balance the protection HADR provides with performance and cost. This best practice explains optimizing for fast failovers, tuning parameters for network and logging performance, and understanding table reorganization methods and load operations in an HADR environment.
- "Tuning and Monitoring Database System Performance" (October 2008) Updated! See best practices for the performance evolution of a data server, from important principles of initial hardware and software configuration to monitoring techniques that help you understand system performance under both operational and troubleshooting conditions. For troubleshooting performance problems, this paper provides a step-by-step, methodical approach to determining the problem.
- "DB2 Workload Management" (October 2008) New! DB2 Workload Management helps prevent, detect, and resolve conflicting resource requirements on a DB2 data server based on defined business objectives. This paper details the current best practices for DB2 workload manager.
Do you ever wish that you could get practical guidance regarding the BEST way to use DB2 9 to improve the value of your DB2 data servers? For the past 6 months, the development teams have been creating Best Practice papers to help you discover the best practices for the most common DB2 9 product configurations with IBM® DB2® for Linux®, UNIX®, and Windows®.
These Best Practice papers
present advice on the most optimal ways you can use DB2 to satisfy key business data processing needs. These papers are authored by leading experts in IBM's development and consulting teams, and have been extensively tested.
Each Best Practice paper
is designed to provide practical guidance for the most common DB2 9 product configurations. By applying these recommendations, you may improve the value of your DB2 data servers and align yourself with IBM's technical direction for DB2.
Currently available are:
Database storage (May 2008)
Discover best practices for database storage, including guidelines and recommendations for spindles and logical unit numbers (LUNs), stripe and striping, transaction logs and data, file systems versus raw devices, Redundant Array of Independent Disks (RAID) devices, registry variable and configuration parameter settings, and automatic storage.
Physical database design (May 2008)
Learn about all of the design features that relate to the physical structure of the database, such as data type selection, table normalization and denormalization, indexes, materialized views, data clustering, multidimensional data clustering, table (range) partitioning, and database (hash) partitioning.
Data life cycle management (May 2008)
See the best practices for designing and implementing scalable solutions that facilitate the continuous feed, or roll-in and roll-out, of data with minimal interruption of data access.
Minimizing planned outages (May 2008)
View the best practices for minimizing, or even potentially eliminating, planned database outages. Planned outages can include activities such as routine database maintenance or upgrades to your database environment.
IBM data server security (May 2008)
This Best Practices roadmap details how to protect data servers against common data security threats, as well as some uncommon ones, and describes useful countermeasures for these threats.
Deploying IBM DB2 products (May 2008)
Discover best practices for deploying the DB2 9.5 family of products on Linux, UNIX, and Windows platforms across multiple computers quickly, easily, and consistently.
Writing and tuning queries for optimal performance (May 2008)
Learn best practices for minimizing the impact of SQL statements on DB2 database performance. This paper focuses on good fundamental writing and tuning practices that can be widely applied to help improve DB2 database performance.
Tuning and monitoring database system performance (May 2008)
See best practices for the performance evolution of a data server, from important principles of initial hardware and software configuration to monitoring techniques that help you understand system performance under both operational and troubleshooting conditions. For troubleshooting performance problems, the paper provides a methodical, step-by-step approach to isolating the cause.
Managing XML data (May 2008)
Get principles and guidelines for using DB2 pureXML™ to solve business problems effectively and to achieve high performance when managing XML data in enterprise applications, illustrated with a real-world example that is easily adapted to other types of XML applications. See Downloads for the files associated with this paper.
Improving data server utilization and management through virtualization (May 2008)
Discover best practices for deploying DB2 9 with IBM System p™ virtualization technology. Select the right blend of virtualization features and configurations to achieve desired business goals while improving the utilization of IT resources.
Frequently asked questions from problem management reports (May 2008)
Get answers to the most common questions that DB2 customers have about the DB2 data server.
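As a taste of the territory the Managing XML data paper covers, the sketch below stores documents in a native XML column and queries inside them with SQL/XML. The table, column, and element names here are invented for illustration; the paper's own worked example is more complete:

```sql
-- Hypothetical table storing customer documents in a native XML column.
CREATE TABLE customers (id INTEGER NOT NULL PRIMARY KEY, info XML);

-- Pull an element out of each document with XMLQUERY.
SELECT XMLQUERY('$d/customer/name' PASSING info AS "d") AS name
FROM customers;

-- Filter rows on a value inside the XML document with XMLEXISTS.
SELECT id
FROM customers
WHERE XMLEXISTS('$d/customer[city = "Toronto"]' PASSING info AS "d");
```

Because the documents are stored natively rather than shredded into relational columns, queries like these operate directly on the XML structure, which is the central idea the pureXML guidelines build on.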
Additional Best Practice papers are in development and will be published on developerWorks as they become available.
Congratulations and THANKS to all the authors for their very valuable contribution to these papers!
PS... see also Kate's blog