Manage performance and administer DB2 databases using IBM Data Server Manager

Overview and an example use case for performance management
Overview

In this tutorial, we touch on the simple setup experience for Data Server Manager and its extended capabilities, and then walk through a specific example of investigating a performance alert, which highlights several key features.

A solution for enterprise database management

When managing DB2 for Linux, UNIX, and Windows databases across an enterprise, businesses often end up with a collection of disparate, narrow-scope tools for monitoring and administering those databases. Here is some of the most common feedback that we hear from DBAs and IT staff:

  • There are too many tools, too many installations, too many repositories.
  • Tools need to be easier to use and need to support the team's workflow, provide diagnostic smarts, and give actionable recommendations.
  • Tools need to be more scalable and perform better.
  • Tools must support managing databases across the enterprise.

Feedback like this provided the key requirements for the design and development of Data Server Manager, which aims to be simple, scalable, and smart by being able to:

  • Deliver a simplified user experience — from up-and-running to day-to-day use.
  • Provide information via a single tool with a common integrated web console — not multiple tools with their own interfaces.
  • Provide familiar capabilities from other InfoSphere® Optim™ database tools that enterprises depend on — capabilities such as performance monitoring and tuning, configuration, storage optimization, and database administration.
  • Use expert analytics to help identify, diagnose, solve, and prevent performance problems.

Up and running with IBM Data Server Manager

IBM Data Server Manager is designed to get you up and running quickly. The setup program has only three steps, so you can use Data Server Manager within minutes.

Users and access

During setup, you specify a default login name for the Data Server Manager tool. This user ID is the default administrator for the tool and has the privileges to perform all functions within the tool. This user ID does not need to exist in the operating system.

You can create additional users for login to Data Server Manager, and you can specify which databases they can work with. For direct access to a database itself, such as when running SQL or issuing other administrative commands to the database, users will still be required to provide their own DB2 credentials for that database. For example, you might have a group of DBAs who need only to view performance data but not to administer the database. This kind of separation of duties is fully supported in Data Server Manager.

Configuration

After you run the setup and log in to the web console, you need to define the databases that you want to manage. Data Server Manager provides several ways to define these databases.

Image shows menu of choices for adding databases

Add a database connection manually
You can define new database connections or modify existing connections at any time.
Discover the databases
Using DB2's discovery feature, the tool can find databases in your enterprise and add them automatically.

Image shows Discover Databases dialog showing three databases

Import connections
The tool can import connections from a delimited text file that was exported from other InfoSphere Optim database tools.

After a database connection is defined, it is immediately ready for use in the tool. From the left-side navigation menu, the two primary functional areas are:

Monitor
A default performance monitoring profile is applied, providing monitoring and alerting out of the box.
Administer
A DBA can view and act on the database objects, edit and run SQL, create and manage database jobs, and view the database diagnostic logs.
Image shows three database tiles on the home page with the menu expanded

Extending the capabilities of Data Server Manager

With Data Server Manager Enterprise Edition, you can extend the tool by configuring a repository database. The repository stores historical monitoring data and enables additional functions, such as advanced performance analytics, alerts based on deviations from historical behavior, and robust tuning and optimization capabilities.

Any supported DB2 for Linux, UNIX, and Windows database can be used as a repository, and it's a simple one-screen setup to make the switch. You have complete control over when to collect the monitoring data, how long to keep it, and which databases you want to collect it for.

Scenario: Solving a performance issue

Let's walk through some of the monitoring, optimization, and administration capabilities of Data Server Manager in a scenario.

Scenario:

Eric is a DBA at the Sample Outdoor company. He's monitoring hundreds of databases for production, development, and test teams. He is using the full capabilities of Data Server Manager Enterprise Edition with a repository database.

One day, while monitoring the health of the databases in the enterprise, Eric notices that Data Server Manager has raised an alert for GSDBX, the database that is used by the development team for a new project. When the alert appears on the home page, Eric investigates the problem.

Viewing the alert

The Data Server Manager Home page shows a tile for each monitored database. Each tile shows some key metrics on gauges and an alert count indicator.

  1. On the GSDBX tile, Eric notices the red (critical) alert count and a large increase on the I/O gauge. He clicks the red alert count to see what happened.

    Image shows GSDBX tile showing one red alert

    The alert brief shows that a statement deviated from its past known performance.

Image shows an alert showing current performance of the statement deviated from its past performance
  2. Eric clicks the View details link to get more information about this statement alert. The alert details identify specific metrics that deviated compared to past executions of this statement. Eric also notices that the Data Server Manager analytics suggest that a missing index could be contributing to the performance degradation.

    Image shows statement performance details that appear after clicking the details link
  3. Eric looks at the Recommended Actions tab for the alert. One of the suggested actions is to view the access plan.

    Image shows a list of suggested actions that appears when the Recommended Actions link is clicked

Reviewing the access plan

Eric selects the View access plan link in the alert, which runs the Explain function and generates an access plan graph.

Image shows access plan graph

In the access plan graph, Eric notices a lot of table scans. He is surprised because he thought that the tables involved in the query already had indexes. The high number of table scans is also consistent with the large increase in reads mentioned in the alert and with the increase in I/O that was shown on the Home tile gauge.
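Outside the tool, the same check can be reproduced with DB2's Explain facility. The sketch below is illustrative only: it assumes the explain tables already exist (they can be created with the SYSPROC.SYSINSTALLOBJECTS procedure), and the statement and table names are hypothetical stand-ins for the alerted query.

```sql
-- Hypothetical stand-in for the alerted statement; the explain tables
-- are assumed to exist in the current schema.
EXPLAIN PLAN FOR
  SELECT * FROM GOSALES.ORDER_DETAILS WHERE ORDER_NUMBER = 100100;

-- Table scans appear as TBSCAN operators in the captured plan:
SELECT OPERATOR_ID, OPERATOR_TYPE, TOTAL_COST
  FROM EXPLAIN_OPERATOR
 ORDER BY OPERATOR_ID;
```

A plan dominated by TBSCAN operators on large tables is the textual equivalent of the table scans that Eric sees in the access plan graph.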

Investigating the problem statement

The alert also includes a link to tune the single statement, but before Eric does the tuning, he decides to look at some details about the historical performance of this statement.

  1. Eric closes the alert dialog, clicks the GSDBX tile to open the monitoring details for the GSDBX database, and goes to the Executed Statements view. He sorts the statements by average rows read and sees the statement that triggered the alert.

    Image shows view showing top three executed statements
  2. Because Eric wants to see how this statement behaved in the past, he switches to the Historical view to analyze the data for the last three hours.

    Image shows view showing the real-time drop-down menu
  3. He drills down to look at the historical details for the statement of interest and sees that the performance really was better in the past. The blue bars show the statement execution time stacked with other metrics, and the darker blue for the statement execution time is the most prominent value. The red line is the execution count, which Eric sees was high while the execution time was low. When the execution time went up (the statements got slower), the execution count naturally dropped.

    Graph shows statement time details

Eric recalls that the alert suggested the likelihood of a missing index, and he can see that the statement performed well before. Could this problem be a case of a dropped index?

Tracking database changes

Because Eric enabled the track changes feature for the GSDBX database, he knows that he can easily check whether any indexes were dropped:

  1. Eric opens the Track Changes page to look at recent changes on the GSDBX database. He sees that three indexes were deleted.

    Image shows track changes page showing three indexes deleted
  2. He views the details for the index objects and finds that the following three indexes were deleted: IDXONE, IDXTWO, and IDXONE_U. The timestamp is shown in GMT.

    Image shows track changes page showing details of the deleted indexes
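The same conclusion can be confirmed directly against the system catalog, because a dropped index simply disappears from the SYSCAT.INDEXES view. This query is a sketch; in practice you would also filter on the table schema.

```sql
-- Any of the three index names missing from this result set has been dropped.
SELECT INDNAME, TABSCHEMA, TABNAME, COLNAMES
  FROM SYSCAT.INDEXES
 WHERE INDNAME IN ('IDXONE', 'IDXTWO', 'IDXONE_U');
```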

Tuning the statement

Now that he has evidence of deleted indexes, Eric is ready to tune the statement to see whether Data Server Manager has any new index recommendations:

  1. Eric goes back to the alert that he was looking at before and opens the Recommended Actions tab again. The last time he opened the alert, he viewed the access plan and saw many table scans. This time, he chooses the option to tune the statement.

    Image shows recommended actions view showing how to tune the statement
  2. Eric names his query, chooses to re-explain the statements, chooses to get recommendations for statistics and indexes, and clicks Run.

    Image shows advanced options window showing which recommendations to select and which statements to re-explain

    Eric can go about his other work while the tuning job runs in the background.

  3. When the job is complete, Eric reviews the tuning results. The advisor has no recommendations for statistics because they are current. The advisor recommends one index, with a projected cost improvement of 99.79 percent.

    The graph on the left shows the estimated rows read by the statement without and with the recommended index, based on the current statistics. The chart on the right gives an overview of the top operations by total cost; in this case, the top operations are all nested-loop join operations that use the recommended index.

View shows tuning results in graphs
  4. Eric clicks the index recommendations number to view the details.

    View shows index recommendations details

    Eric notices that the recommended index matches one of the deleted indexes that he found earlier in the Track Changes report.

  5. Eric decides to check the impact of the new index. Although Eric could run the Data Definition Language (DDL) right now, he wonders what impact this new index will have on the other queries running in the system — after all he tuned only that one statement. He clicks the Impact Analysis button to submit a job for the analysis.

    The Impact Analysis job examines whether this new index will affect any other statements collected in the past. The tuning analytics engine will correlate these statements and determine if the originally recommended index would benefit other statements or if there is another index or set of optimal indexes for these related statements.

  6. When the Index Impact Analysis job is complete, Eric views the results. The Original Index Recommendations section shows that this new index would impact three statements (two other statements in addition to the alerted statement). In the Optimal Index Recommendations section, he sees a second index and, based on the key columns, notices that it matches another of the deleted indexes.

    View shows index impact analysis results

    These two indexes also impact three statements.

  7. Eric clicks View Statements to see what the other statements are, and he sees that they are all projected to have significant performance gains if the new indexes are created.

    Table shows estimated performance gains of affected statements

    In this example, the original and optimal recommendations have almost the same projected performance gain. Eric wonders whether it would be worth the extra resources to create that second index when the additional gain is so slight. However, those deleted indexes are still bothering him, and while he's pleased and optimistic about the analysis results, he definitely wants to review the issue with his team.

  8. Eric clicks the Edit and Run DDL button for the optimal indexes to see the generated statements. He could execute the DDL to create the indexes right from the tuning job results screen, but he wants to review the DDL with his team, and he would need to create these indexes during off-hours anyway to minimize the impact on the environment. Eric copies the recommended index DDL to his clipboard and closes the window.
  9. Next, Eric wants to save the DDL because he will need it later. Eric clicks Administer > SQL Editor from the navigation menu to open the SQL Editor, and then he pastes the index DDL into the work area.

    View shows the SQL Editor work area where the optimal DDL is pasted

    He clicks Save to save the DDL to a file on his local machine so that he can email it to his coworkers for the team review meeting.
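For reference, advisor-generated index DDL generally has the following shape. The schema, table, key columns, and generated index name below are hypothetical stand-ins, not the actual recommendation from the scenario.

```sql
-- Illustrative shape of a tuning-advisor index recommendation;
-- all names and key columns here are hypothetical.
CREATE INDEX "DB2USER"."IDX1503201418490000"
    ON "GOSALES"."ORDER_DETAILS" ("ORDER_NUMBER" ASC, "PRODUCT_NUMBER" ASC)
    ALLOW REVERSE SCANS
    COLLECT DETAILED STATISTICS;
```

The generated name is machine-friendly rather than meaningful, which is one reason Eric later renames the indexes before running the DDL.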

Running DDL from the SQL editor

At his staff meeting, Eric explains that he saw an alert about a statement that went bad and that the track changes report identified three deleted indexes. The deleted indexes are very likely the reason that the statements are performing worse than before. The tuning wizard recommends creating two indexes, which match those that were deleted. He has the new DDL ready to go for two of the indexes, but he wants to know if anyone knows what happened. One of Eric's teammates realizes that he accidentally deleted all three indexes when he meant to delete only the IDXONE_U index. Oops!

After some more discussion, Eric is now confident that he can re-create the other two dropped indexes to get the database back to its intended state:

  1. Eric logs back on to the Data Server Manager web console, opens the SQL Editor, and imports the index DDL from the saved file into the work area. He modifies the recommended index names to match the original names of the indexes that were deleted: IDXONE and IDXTWO.
  2. After making his changes, he validates the syntax, just to make sure that there aren't any mistakes.

    View shows successful validation of the syntax of the recommended index names
  3. Next, he clicks Run to execute the DDL statements. Data Server Manager prompts for credentials to the GSDBX database to be sure that Eric is allowed to make such a change. The DDL executes successfully.

    View shows successful run results of DDL statements
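The DDL that Eric runs would look something like the following sketch, with the advisor's generated names replaced by the original index names. The table and key columns remain hypothetical stand-ins.

```sql
-- Re-create the two accidentally dropped indexes under their original names
-- (table and key columns are illustrative).
CREATE INDEX IDXONE ON GOSALES.ORDER_DETAILS (ORDER_NUMBER ASC)
    ALLOW REVERSE SCANS;

CREATE INDEX IDXTWO ON GOSALES.ORDER_DETAILS (PRODUCT_NUMBER ASC)
    ALLOW REVERSE SCANS;

COMMIT;
```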

Verifying the improvement

After the indexes are restored, Eric restarts the workload to confirm that the statements are again performing well. He can verify this behavior in several ways:

  1. From the Executed Statements view, Eric observes that the number of rows read has decreased significantly because the indexes are being used.

    View shows executed statements details
  2. Eric runs the visual explain again for the original statement identified in the alert. He sees that the access plan graph now shows that the indexes are being used.

    Image shows access plan graph with restored indexes
  3. Eric looks at the Time History graph for the statement, from before the indexes were dropped to the current time after they were re-created, and sees that the query is again performing more like its normal behavior.

    Bar graph shows time history after indexes were restored
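The steps above can also be verified in plain SQL with the MON_GET_PKG_CACHE_STMT table function, which reports rows read and execution counts for statements in the package cache. This is a generic sketch, not tied to the specific statement in the scenario.

```sql
-- Top dynamic statements by rows read, with rows read per execution;
-- 'D' requests dynamic SQL and -2 means all database members.
SELECT SUBSTR(STMT_TEXT, 1, 60) AS STATEMENT,
       NUM_EXECUTIONS,
       ROWS_READ / NULLIF(NUM_EXECUTIONS, 0) AS ROWS_READ_PER_EXEC
  FROM TABLE(MON_GET_PKG_CACHE_STMT('D', NULL, NULL, -2))
 ORDER BY ROWS_READ DESC
 FETCH FIRST 5 ROWS ONLY;
```

After the indexes are re-created, the rows read per execution for the alerted statement should drop back toward its historical level.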

After the workload runs a few times, Data Server Manager detects the improvement and closes the original performance alert.

Conclusion

In this tutorial, we showed an example scenario of how a database administrator can use Data Server Manager to efficiently monitor, analyze, identify, and solve a performance issue by using an integrated user interface, without switching from tool to tool. In the process, we highlighted some key features of the tool, including smart alerts with solution guidance, performance tuning with index impact analysis, configuration change tracking, performance analysis with historical views, and the SQL Editor.



ArticleID=1000857
publish-date=03202015