Manage your Informix database with the IDS OpenAdmin Tool, Part 1: Configuring and using the OpenAdmin Tool with IDS

Get started with OAT

IBM® Informix® Dynamic Server (IDS) is well known for ease of administration. To make server administration more user friendly, a new open source, platform-independent tool called OpenAdmin Tool (OAT) is now available to IDS users. The OpenAdmin Tool provides a graphical interface for administrative tasks and performance analysis tools. Introduced with IBM Informix Dynamic Server version 11.10, it is a PHP-based, Web browser administration tool that gives you the ability to administer multiple database server instances from a single location. This article highlights some of the most useful features of OAT and helps you get started.

Dhanashri Kudgavkar (dkudgavk@in.ibm.com), Informix-Interoperability Team, India Software Lab, IBM

Dhanashri Kudgavkar works for the Interoperability team, ISL, India. She has extensive experience with IDS behaviors and features.



Prasanna Alur Mathada (amprasanna@in.ibm.com), Informix-Interoperability Team, India Software Lab, IBM

Prasanna Mathada is a certified Informix professional. He currently works for the Interoperability team, ISL, India.



Amitava Chakraborty (amitacha@in.ibm.com), Informix-Interoperability Team, India Software Lab, IBM

Amitava Chakraborty has more than eight years of experience with the different Informix products. Currently, as a lead on the Informix Interoperability team, he is responsible for ensuring Informix scalability to other products.



10 July 2008

Also available in: Chinese, Russian

Introduction

IBM Informix Dynamic Server has many features that cater to a variety of user groups, including developers and administrators. One of the strong features of IDS is its low administration cost; IDS is well known for its hands-free administration. To make server administration even easier, a new open source, platform-independent tool called OpenAdmin Tool (OAT) is now available to IDS users. OAT includes a graphical interface for administrative tasks and performance analysis tools. Introduced with IBM Informix Dynamic Server version 11.10, it is a PHP-based, Web browser administration tool that gives you the ability to administer multiple database server instances from a single location. Some of the most useful features of OAT include the SQL Explorer, Session Explorer, Task Scheduler, Health Center, Space Administration, and Server Administration. In addition to the built-in features provided by OAT, users can easily plug in their own extensions to create the desired functionality. This article highlights some of these features and helps you get started.

Prerequisites

The OAT requires that the following products be installed. The versions in parentheses indicate the versions with which the OpenAdmin Tool has been tested. Figure 1 shows how these products support the OAT.

  • A Web server (Apache 2.2.6)
  • IBM Informix I-Connect or IBM Informix Client SDK (3.00)
  • PHP compiled with PDO, PDO_SQLITE, GD, and SOAP enabled (5.2.4 minimum)
  • Informix PDO Module
Figure 1. OpenAdmin Tool Stack
Open_Admin_Tool_Stack

You'll find installation instructions for the OpenAdmin Tool in the readme file that comes with the tool (see Resources).

Log in and connect to the IDS Server

Once installation is complete, go to the OAT root URL (for example, http://SERVERNAME/LOCATION where SERVERNAME is the name of the server where the Web server is running, and LOCATION is the location of OAT within your Web server document root directory where the tar or zip of OAT was extracted). Figure 2 shows the login screen:

Figure 2. Login screen
Login Screen

There are two radio buttons at the top left corner of the login screen, Login and Admin. The default is Login. The Group field is a list box containing the connection group names. You can use the default group, or you can have a user-defined group to which all your server connections belong. The configuration of the group, details of the server and its password can all be set by choosing the options that are made available when you select the Admin radio button.

The Get Servers button on this screen brings up a list box containing all the server connections that belong to the selected group. Selecting a connection from this list populates the right side of the screen. You can now start working with the chosen server connection by clicking the Login button on the right of the screen.

Choosing the Admin radio button switches you to the Admin screen. The Admin screen provides options to configure OAT, manage connections, and so on.

Figure 3. Open Admin menu
OAT Admin Screen

OAT Config option

In the Open Admin menu, select OAT Config to set the parameters needed to get OAT up and running. Figure 4 illustrates the OAT configuration screen.

Figure 4. Configuration screen
OAT Config Screen

On this screen, you can set the following parameters:

Table 1. Configuration parameters
Default language: The default language is English; you can also switch to German (Deutsch).
Base URL: The URL with which the user brings up the admin console of OAT.
Install directory: Location where all the files and folders specific to OAT are placed after installation.
Connections database location: Where the database details used to connect to each server are stored.
Pinger interval: Value can be set based on your requirements and environment; the default is 300.
IDS protocol: Either onsoctcp or ontlitcp (onsoctcp is the default).

Manage connections:

Using Manage Connections, you can add a group, add a connection or view the connection admin.

  • Add Group

Under Manage Connection, select Add a group. Your screen should be similar to that in Figure 5.

Figure 5. Add a new group
Add Group

To add an IDS server group, provide a server group name and a server group password of your choice. Selecting Read only disables all administration privileges (such as adding a chunk) in OAT for every server defined within that group. Decide whether to mark the group as read only, and click Add to add the group.

  • Add Connection

The following parameters need to be in place in order to add a new IDS server connection.

Table 2. Parameters to add an IDS server connection
Group: Each server connection belongs to a defined group. If no group name is selected, the new server connection becomes part of the Default group.
Username: Username on the host machine where the IDS server is located.
Password: Password for that username.
Informix Server: Name of the IDS server instance.
Host Name: Name of the host machine that hosts the IDS server.
Port: Port number of the IDS server.
Attributes: Optional attributes, such as the latitude and longitude used to locate the server on the map.

The values of latitude, longitude and IDS port help to locate the Informix server on the map.

Select the Test the connection button to test your connection. If any of the parameter values are wrong, or if the server is down, you might see an error message like the one in Figure 6.

Figure 6. Sample connection error
Connection Error message

If the Informix server is running and the values are correct, a successful connection is established. You will get a success message that reads "online" as Figure 7 shows.

Figure 7. Sample connection success message
Connection Success message

Once this new connection is successfully tested, click on Save to add the connection to the respective group chosen.

  • Connection Admin

The Connection Admin screen (Figure 8) allows you to delete a group that is no longer required. Select the group to be deleted by clicking the check box next to the group name, and then select Delete. This removes the entire group and the connections configured within it, which is useful when an administrator has many groups to manage.

Figure 8. Connection Admin screen
Connection Admin Screen

OAT login

This option links back to the OAT login screen explained earlier. As an alternative step, you can directly click Login to reach the OAT login screen.

Health center

This facility gives you an overall impression of system and server health. The Health Center comprises alerts and a dashboard.

Alerts

If you click Health Center > Alerts, you will see a screen similar to that in Figure 9. You can select the severity, alert type and state of the alerts you want to view. There is also an option to show the different types of alerts in different colors to increase visibility and clarity.

Figure 9. Health center alerts
Health Centre Alerts

Manage dashboard

If you select Dashboard, you get a reference screen of system-related information. The Dashboard option gives you two tabs to choose from: Default and Space. The Default tab (Figure 10) shows current memory consumption and transaction-related information for the system, whereas the Space tab (Figure 11) contains information on dbspace usage as well as locks.

Figure 10. Health center dashboard - Default
Health Centre Default Dashboard
Figure 11. Health center dashboard - Space
Health Centre Space Dashboard

Logs

This facility helps you view different logs related to the database server and comprises the admin command log, the online messages, and the OnBar activity log.

  • Admin command

If you select the Admin command option, you get a screen similar to Figure 12 that tabulates the execution status of the different commands that were scheduled through dbcron.

Figure 12. Admin Command
Logs Admin Command
  • Online messages

If you select the online messages option, you are shown the contents from the online log for the database server (Figure 13). The error and warning messages are shown in red and yellow, respectively.

Figure 13. Online messages
Logs Online Messages
  • OnBar activity

If you select OnBar activity, you are shown the contents of the file specified by the BAR_ACT_LOG configuration parameter for the database server (Figure 14). This log is generated when you perform backups through the OnBar utility. The error and warning messages are shown in red and yellow, respectively.

Figure 14. OnBar activity log
Logs OnBar Activity

Task Scheduler

The Task Scheduler feature of the OAT allows you to manage and run scheduled maintenance, monitoring and administration tasks at predefined times or as determined internally by the server. You can monitor activities (for example, checking free log space) and create automatic corrective actions. Scheduler functions collect information and monitor and adjust the server, using an SQL-based administrative system and a set of tasks.

The Task Scheduler is defined and driven by tasks. The sysadmin database, which is a logged database, contains the tables that store Task Scheduler information. By default, only user informix is granted access to the sysadmin database. The task properties are stored in the ph_task table in the sysadmin database. Each row in this table is a separate task, and each column is a task property. You can modify the task properties and set up new tasks by inserting rows into the table. PH_RUN, PH_ALERT, PH_THRESHOLD, and PH_GROUP are some of the other tables related to task scheduling. To read more about these tables, see Resources.
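Because tasks are just rows in ph_task, you can create one directly with SQL. The sketch below assumes the IDS 11 sysadmin schema and a hypothetical stored procedure my_cleanup(); verify column names against your server's documentation before use.

```sql
-- Run from the sysadmin database as user informix.
-- Registers a task that calls a hypothetical procedure
-- every 30 minutes between 08:00 and 19:00.
INSERT INTO ph_task
    (tk_name, tk_description, tk_type, tk_group,
     tk_execute, tk_start_time, tk_stop_time, tk_frequency)
VALUES
    ("example_cleanup",                  -- task name
     "Example housekeeping task",        -- description
     "TASK",                             -- TASK or SENSOR
     "MISC",                             -- task group
     "EXECUTE PROCEDURE my_cleanup()",   -- SQL to run (hypothetical)
     DATETIME(08:00:00) HOUR TO SECOND,  -- daily start time
     DATETIME(19:00:00) HOUR TO SECOND,  -- daily stop time
     INTERVAL(30) MINUTE TO MINUTE);     -- run every 30 minutes
```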

The Task Scheduler in OAT has been categorized into three parts -- the scheduler, the task details and the task runtimes -- as Figure 15 shows.

Figure 15. Task Scheduler
Task Scheduler Tab
  • Scheduler

The scheduler in the Dynamic Server contains the built-in tasks shown in the following table. You can modify these tasks and set up new ones using the Task Scheduler Wizard. Table 3 describes the built-in tasks.

Table 3. Built-in tasks in the Task Scheduler
mon_command_history: Purges the command history table.
mon_config: Saves any changes made to the ONCONFIG file.
mon_config_startup: Saves the ONCONFIG file on every server startup.
mon_sysenv: Tracks the database server's startup environment.
mon_profile: Saves server profile information.
mon_vps: Collects virtual processor information.
mon_checkpoint: Saves information about checkpoints.
mon_memory_system: Monitors server memory consumption.
mon_table_profile: Saves table profile information, including the total number of updates, inserts, and deletes that occurred on the table.
mon_table_names: Saves the table names along with their creation times.
mon_users: Saves profile information about each user.
check_backup: Checks to ensure that backups have run.
ifx_ha_monitor_log_reply_task: Monitors the HA secondary log replay position.
Alert Cleanup: Removes old alert entries from the system.
post_alarm_message: System function to post alerts.
Auto Update Statistics Evaluation: Evaluates which columns and tables should have their statistics and distributions refreshed.
Auto Update Statistics Refresh: Refreshes the statistics and distributions recommended by the evaluator.
Figure 16. Scheduler screen
Scheduler Screen

Click on the Add New Task button to define a new task using the Task Scheduler wizard.

Figure 17. Task Scheduler Wizard
Task Scheduler Wizard

There are two types of scheduler tasks, Task and Sensor. As explained earlier, a task provides the means for running a specific job at a specific time or interval. A sensor can be defined to collect and save information. If Run Only at Server Startup is selected, then the task is executed at the server startup time only.

Provide the task name, task group (for example, DISK, NETWORK, CPU, BACKUP, and so on), and description for the new task. Click Next. Figure 18 shows these steps.

Figure 18. New task setup
New Task Setup Wizard

Now you are asked to specify the schedule for the new task as Figure 19 indicates. Enter a start and stop time as well as the frequency of the task. Click Next.

Figure 19. Schedule for the new task
Schedule for new task

To specify the command that the new task should run, enter either multiple SQL statements separated by semicolons or a user-defined function; it is executed at every task run time.

Figure 20. Command to be run by new task
Command to be run by new task

The last screen of the Task Scheduler Wizard shows the summary of the new task, requesting confirmation of values. Once you confirm this information, a new task is created.

Creating a sensor using the Task Scheduler Wizard:

As mentioned earlier, sensors can be used by DBAs to accumulate server data in order to maintain a history of activities on the server. The Task Scheduler Wizard can be used to create sensors. You will be prompted to enter the sensor name, sensor group, description for the new sensor, start time, stop time and frequency, similar to that of a new task.

Notice that, to create a sensor, there is an additional input field, Data delete, with spaces for days, hours, and minutes. Whenever the sensor executes, the data it gathers is stored in a user-defined table that is created when the sensor is created. Data delete specifies the interval after which data in this table is deleted; set it based on how much history you want to maintain. You must also provide an SQL command to be executed at sensor execution time, just as you do for a task. Once these steps are completed successfully, the new sensor is displayed in the table of tasks.
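As a sketch, a sensor differs from a task mainly in its type and in the result-table columns (tk_result_table, tk_create, and tk_delete in the IDS 11 sysadmin schema; these names and the $DATA_SEQ_ID placeholder should be verified against your version's documentation):

```sql
-- Run from the sysadmin database as user informix.
-- A hypothetical sensor that samples the session count every
-- 15 minutes and keeps 7 days of history.
INSERT INTO ph_task
    (tk_name, tk_description, tk_type, tk_group,
     tk_result_table, tk_create, tk_execute,
     tk_frequency, tk_delete)
VALUES
    ("example_session_count", "Sample session counts", "SENSOR", "MISC",
     "mon_session_count",                 -- result table name
     "CREATE TABLE mon_session_count (id INTEGER, t DATETIME YEAR TO SECOND, cnt INTEGER)",
     "INSERT INTO mon_session_count SELECT $DATA_SEQ_ID, CURRENT, COUNT(*) FROM sysmaster:syssessions",
     INTERVAL(15) MINUTE TO MINUTE,       -- sampling frequency
     INTERVAL(7) DAY TO DAY);             -- data delete interval
```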

  • Task details

After you have entered all the information for your sensor, you can view the task details screen as in Figure 21.

Figure 21. Task details screen
Task Details Screen

The top-level Task Details screen displays all the tasks along with their groups, descriptions, next execution times, and frequencies. At the top of the screen, the Group to View box lets you filter the tasks by group. For example, if you select CPU as the group to view, only the mon_vps task is shown in the table. To learn more about a particular task, or to change its parameters, click the task's name in the Name column. A new screen with that task's details appears.

Figure 22 shows a view of a particular task's detail screen:

Figure 22. Individual task details screen
Individual Task Details Screen
  • Task runtime

The task runtime provides the run summary for every task. The summary table provides information such as the number of executions, the average time taken per execution, the total time taken for all executions, and the last run time for every task. Figure 23 shows a sample task runtime screen.

Figure 23. Task runtime screen
Task run time Screen

You can look at further details of a task and make changes to it by clicking the task name in the Name column. Task parameters is an extra section that you see when you navigate to an individual task from the Task runtime screen. A sysadmin table called ph_threshold stores thresholds for the scheduler tasks; when a threshold is met, the task can decide to take a different action. The task parameters shown here are these thresholds. Figure 24 shows an example of the system-defined check_backup task with two parameters. The Task parameters section only shows up in OAT if there are entries in the ph_threshold table for that particular task.

Figure 24. Task Parameters
Task Parameters
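If you want to see the same thresholds from SQL, a query along these lines (column names per the IDS 11 sysadmin schema; verify for your version) lists the parameters that OAT displays:

```sql
-- Run from the sysadmin database.
SELECT name, value, value_type
  FROM ph_threshold
 WHERE task_name = "check_backup";
```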

Space administration

The Space Administration category is further classified into three sub-categories, namely:

  • Dbspaces
  • Chunks
  • Recovery logs

Dbspaces

Before introducing you to the dbspaces and its use, first we want to give you an introduction to the types of the spaces that can be created using OAT.

The types of spaces available are:

  • Dbspace: A dbspace is a logical unit that can contain between 1 and 32,766 chunks. Place databases, tables, logical-log files, and the physical log in dbspaces.
  • Temp dbspace: A temporary dbspace is a dbspace reserved exclusively for the storage of temporary tables. The database server clears any tables that might remain since the last time the database server shut down.
  • Blobspace: A blobspace is a logical storage unit composed of one or more chunks that store only TEXT and BYTE data. You can store TEXT and BYTE columns associated with distinct tables in the same blobspace.
  • Smart blobspace (sbspace): An sbspace is a logical storage unit composed of one or more chunks that store smart large objects. Smart large objects consist of character large object (CLOB) and binary large object (BLOB) data types. User-defined data types can also use sbspaces.

DATA Space/BLOB Space - Represented by Graph and Statistics:

Figure 25 depicts both graphical and statistical representations of space utilization by the different types of dbspaces: DATA space, TEMP space, and BLOB space. The upper half shows the graph, while the lower half shows the same information as data.

Figure 25. Graphical and statistical representation of DATA - BLOB spaces
Graphical and Statistical representation of DATA - BLOB spaces

The dbspaces table highlights the name of each space, the type of space (dbspace or blobspace), its status, total size, amount of space available, percentage of space consumed, the number of chunks across which the space is spread, and the size of each individual page.

Figure 26. Dbspaces contents table
DBSpaces Contents Table

If your environment has a large number of dbspaces, you can page through their details using the drop-down menu at the top of the table (below the graph on the OAT console) by choosing a page number. The ALL option at the top right corner of the table lists all the existing dbspaces on a single page.

Create a space

Beneath the dbspaces table, you can create new spaces using the options provided under the Create a Space section. Figure 27 is a snapshot depicting the options to create a space.

Figure 27. Create a space
Create a Space

All you have to do is provide the following parameters, and then click Create.

Table 4. Parameters for creating a new space
Name: The name of the space to be created.
Path: The location where the new space will reside.
Offset: Offset to the space, if any.
Size: The size of the space you want, based on both requirement and availability.
Type: The type of space to create: dbspace, temp dbspace, blobspace, or smart blobspace.
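The same operation is available from SQL through the administration API in the sysadmin database. The sketch below assumes the IDS 11 admin() function and an illustrative path; argument order and accepted units should be checked against your version's documentation:

```sql
-- Run from the sysadmin database as an authorized user.
-- Creates a dbspace named myspace; size and offset are in KB.
EXECUTE FUNCTION admin("create dbspace",
    "myspace", "/usr2/IDS1150/data/myspace", "100000", "0");
```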

If the space is successfully created, then the following message is displayed above the dbspace graph.

Figure 28. Dbspace successfully created
DBSpace successfully created

Note: The name of the space to be created should be appended to the path that is provided in the text field of the path. For instance, if the name is 'myspace', then the path should look like '/usr2/IDS1150/data/myspace'.

If the path is not specified correctly, you will see an error message similar to the one in Figure 29 displayed above the dbspace graph.

Figure 29. Error message while creating dbspace
Error message while creating DBSpace

To see how space utilization is distributed within a dbspace, click any of the dbspaces listed in the dbspaces table. This displays four more tabs: Summary, Admin, Tables and Extents.

  • Summary

The Summary tab provides information about the selected dbspace as both statistics and a graph, in units of pages. You will find the statistical information in the Dbspace Info table and, on the right side, a pie chart displaying how the dbspace has been utilized.

The dbspace info covers the name of the dbspace, its owner, the dbspace creation time, the page size, the number of pages used for data, index, and other purposes, and the number of free pages available for further use.

  • Admin

The Admin tab allows you to perform operations such as dropping the entire dbspace, adding the space to the existing dbspace, and performing integrity checks for the dbspace table format and dbspace extents.

For the parameters of the Chunks table, please see Table 6.

Drop space: Here, you can drop the entire space if you wish. To do this, select YES from the drop-down menu, and click on Drop. This option is disabled for the 'rootdbs' dbspace.

Add space: Here, you have the option to add space to the existing dbspace. This can be done by providing relevant values to the following variables:

Table 5. Parameters to add space to the existing dbspace
Path: The location where the new space will reside.
Offset: Offset to the space, if any.
Size: The size of the space you want, based on both requirement and availability.
File Creation: One of two modes, "Create file if none exists" or "File must exist".

Integrity of: You can check the integrity of the space by performing a check on the 'Dbspace Table Format' and 'Dbspace Extents'.

  • Tables

The Tables tab lists all the tables within the dbspace. This dbspace table listing provides various information including the table name, database name with which the table is associated, the DB_LOCALE value for the respective database and table, the number of rows in the table, the date and time when the table was created, the number of pages allocated to the table, the number of pages utilized and the number of extents.

  • Extents

The Extents tab lists all the extents associated with the given dbspace. The dbspace extent listing includes the name of the table, the start and end addresses of the extent, and the sizes associated with them.

Chunks

A chunk is the largest unit of physical disk dedicated to database server data storage. Chunks provide administrators with a significantly large unit for allocating disk space. The maximum size of an individual chunk is 4 terabytes. The number of allowable chunks is 32,766.

If you select Space Admin > Chunks, you see two tabs, Chunk and Chunk IO (as Figure 30 depicts). A window with the statistics for the Chunk tab is displayed by default.

The following table is part of the output of the command onstat -d.

Figure 30. Chunks table
Chunks Table

The table on the Chunk tab, depicted above, provides the configuration details of each individual chunk.

Table 6. Description of the chunks table listing
Chunk number: Lists the chunks sequentially (clicking the column header reverses the display order).
Page size: The size of each page within the chunk.
Offset: Offset to the space, if any.
Size: Disk space taken up by the chunk.
Free: The space available for further use.
Used %: Percentage of the space utilized in the chunk.
Status: Whether the chunk is online or offline.
Path: The location where the chunk resides.

Chunk I/O

The Chunk I/O tab provides the I/O information for each individual chunk. The following table describes the information provided by 'I/O by Chunk'.

Table 7. Information provided by Chunk I/O
Chunk number: Lists the existing chunks by number.
Chunk path: The location where the chunk resides.
Reads: Total number of reads from the chunk.
Writes: Total number of writes to the chunk.

Recovery logs

The recovery logs section is further classified into sub-categories: logical logs, physical logs, checkpoint, admin, and recovery policies.

  • Logical logs

The logical log contains a record of changes made to a database server instance. The logical-log records are used to roll back transactions, recover from system failures, and so on.

If you select Space Administration > Recovery Logs, the Logical logs tab is active by default. The upper half of the page has a pie chart illustrating logical-log usage broken down into used space, free space, and backed-up space. You can obtain the statistical version of the pie chart by selecting the Data button.

Figure 31. Logical log status
Logical Log Status

The second half provides the status information of the logical logs. The Number column indicates the number of logical logs sequentially. The second column gives the unique ID for each logical log. The Size column displays the size of each logical log, and the fourth column, Used, informs the user of the amount of space filled up on each logical log. The Location column indicates where each logical log resides.

The Last Filled column informs the user of the date and time when each logical log was last used. The date is in YYYY-MM-DD format, while the time is in HH:MM:SS format. The Notes column indicates the state of the logical log: used; used and backed up; used and current; or newly added. The last column, Fill Rate, indicates the rate at which the logical log filled up, in bytes or kilobytes per second.

  • Physical logs

The physical log is a set of disk pages where the database server stores an unmodified copy of the page called a before-image.

On the Physical logs tab, you are presented with a table and a graph. The Physical Log Info table contains information about the physical log: its size, the amount of that size utilized, the location of the physical log, its start offset, and the size of the buffer.

On the other hand, the graph depicts the pie chart indicating the space available and the space used up by the physical log. The same information is shown numerically when you click on the Data tab provided at the top right corner of the graph.

  • Checkpoint

A checkpoint refers to a point when the database server synchronizes the pages on disk with the pages in the shared-memory buffers.

As part of the checkpoint procedure, the database server writes a checkpoint-complete message in the message log.

The following table, obtained on the Checkpoint tab, is the output of the command onstat -g ckp.

Figure 32. Checkpoint's info
Checkpoint's Info

Description of the parameters on the Checkpoints table:

Table 8. Parameters of the checkpoints table
Interval: Checkpoint interval ID.
Type: The type of checkpoint; one of block, hard, norm, and unblock.
LSN: Logical-log position where the checkpoint is recorded.
Trigger: Event that triggered the checkpoint; events include Admin, Startup, CKPTINTVL, LongTX, Recovery, Backup, Plog, Llog, Misc, RTO, CDR, Pload, Conv/Rev, Reorg, HDR, User, and Lightscan.
Time: Clock time when the checkpoint occurred; an asterisk (*) indicates that the requested checkpoint was a transaction-blocking checkpoint.
Block time: Individual transaction blocking time, in seconds, for that particular checkpoint.
Crit time: Time spent waiting for the critical section to be released.
Flush time: Time, in seconds, to flush bufferpools.
Ckpt duration: Time, in seconds, for all transactions to recognize a requested checkpoint.
# Dirty Buffers: Number of dirty buffers flushed to disk during the checkpoint.
# Waits: Average time, in seconds, transactions waited for the checkpoint.
  • Admin

    The Admin tab allows you to perform the following operations:

    • Do checkpoint
    • Add logical logs
    • Drop logical logs
    • Move physical logs

Do checkpoint

Here, you are provided with options to initiate two types of checkpoints -- normal checkpoint and sync checkpoint.
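If you prefer SQL, the administration API in the sysadmin database can request a checkpoint as well (assuming the IDS 11 admin() function; check your version's documentation):

```sql
-- Run from the sysadmin database as an authorized user.
EXECUTE FUNCTION admin("checkpoint");
```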

Add logical log

On the Admin tab, select Add logical log. Table 9 includes the parameters you need to supply in order to add a logical log.

Table 9. Required parameters to add a logical log
Dbspace name: Choose, from the available options, the dbspace to which you intend to add the log.
Size: The size of the logical log in KB, based on your requirement.
Number: The total number of logs to add, between 1 and 6.
Attributes: Decide whether to add the logical log after the current log or to append it at the end of the logs.
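The equivalent operation through the SQL administration API might look like the following sketch (the IDS 11 admin() function is assumed; the dbspace name is illustrative and the size is in KB):

```sql
-- Run from the sysadmin database as an authorized user.
-- Adds a 10,000 KB logical log to the dbspace logdbs.
EXECUTE FUNCTION admin("add log", "logdbs", "10000");
```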

Drop logical log

On the Admin tab, select Drop logical log. Table 10 includes the parameters you need to supply in order to drop a logical log.

Table 10. Required parameters to drop a logical log
Logical log number: The drop-down menu lists the existing logical logs along with their locations. Choose the log you wish to drop (only one log at a time).
Confirm: Choose Drop logical log to confirm dropping the log. If you are uncertain, select Do not drop logical log.

Move physical log

On the Admin tab, select Move physical log. Table 11 includes the parameters you need to supply in order to move the physical log.

Table 11. Required parameters to move a physical log
Dbspace name: Choose, from the available options, the dbspace to which you intend to move the log.
Size: The size of the physical log in KB, based on your requirement.
Confirm: Choose Move physical log to confirm the move. If you are uncertain, select Do not move the physical log.
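The move can also be sketched with the SQL administration API (the "alter plog" command of the IDS 11 admin() function is assumed; the dbspace name is illustrative and the size is in KB):

```sql
-- Run from the sysadmin database as an authorized user.
-- Moves the physical log to dbspace plogdbs with a size of 50,000 KB.
EXECUTE FUNCTION admin("alter plog", "plogdbs", "50000");
```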
  • Recovery policies

The Recovery Policies tab allows you to configure the Recovery Time Objective, turn Auto Checkpoints and Auto LRU Tuning on or off, and save the settings. The Recovery Time Objective can be turned off, or its value can be tuned anywhere from 30 seconds to 30 minutes. Auto Checkpoints and Auto LRU Tuning can only be switched on or off.
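These policies map to ONCONFIG parameters, which OAT edits for you. A sketch of the relevant entries (the values shown are illustrative):

```
RTO_SERVER_RESTART 60   # target recovery time in seconds (0 disables the RTO policy)
AUTO_CKPTS         1    # 1 = automatic checkpoints on, 0 = off
AUTO_LRU_TUNING    1    # 1 = automatic LRU tuning on, 0 = off
```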

Server Administration

This facility helps you view the state of a database server and take the necessary server administration actions. Multiple sub-options are available to support server administration.

  • MACH

This is one of the key features of the IDS 11.10 and IDS 11.50 releases. This facility helps you maintain high-availability data replication (HDR), remote standalone secondary (RSS), and shared disk secondary (SDS) servers in the system. Because this feature needs more extensive explanation, it will be covered in a future article.

  • Configuration

This shows the configuration parameters declared in the ONCONFIG file. Each row indicates the parameter name, its current value, and whether the parameter is dynamically configurable (see Figure 33). Click on any of the parameters to open a new screen that contains the name of the parameter, its description and current value, a recommendation on the value if the system detected a deviation or possible improvement, and whether the parameter can be changed dynamically (see Figure 34). Rows highlighted in yellow have recommendations. You can also filter for recommended and dynamic parameters by using the list box located at the top of the screen.
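Dynamically configurable parameters can also be changed outside OAT with onmode; for example (the parameter and value are illustrative):

```shell
# Change a dynamically tunable parameter in memory and in the ONCONFIG file
onmode -wf AUTO_CKPTS=1

# Change it in memory only (reverts at the next restart)
onmode -wm AUTO_CKPTS=1
```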


Figure 33. Parameters of Onconfig file
Server Administration Configuration


Figure 34. Details of individual Parameter
Server Administration Configuration_1
  • System validation

This feature checks data and extents for consistency and corruption, a task traditionally done through the oncheck utility. You have the flexibility to check the data format at the database level, or to narrow the filter down to the table level. Extents can be checked and verified for each dbspace.
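The checks that OAT runs correspond to oncheck invocations such as the following (the database and table names are hypothetical):

```shell
# Check data pages for every table in a database
oncheck -cd stores_demo

# Narrow the check down to a single table
oncheck -cd stores_demo:customer

# Check extents for all dbspaces
oncheck -ce
```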

Click Server Administration > System Validation to see the Server Administration Check Validation screens. Figure 35 shows how to run a consistency and corruption check on data and extents, narrowing the filter down to the table level.

Figure 35. Server Administration Check Validation-1
Server Administration Check Validation

Figure 36 shows the extent being checked and verified for each dbspace:

Figure 36. Server Administration Check Validation-2
Server Administration Check Validation
  • User privileges

This option gives you the flexibility to declare user privileges at different levels on the data server. Based on the database selected from the list, you can declare privileges at the database level, at the table level, and on roles.

At the database level, you can modify existing privileges as well as create new ones after choosing the appropriate options, as shown in Figure 37.

To manage privileges at the database level, select a database you wish to modify and select the Database-Level privileges option. Fill in or modify the information for the user, privilege and default role.

Figure 37. Manage privileges - database level
Server Administration User Privileges

At the table level, you can see the current privileges assigned to each user on each table. To manage privileges at the table level, select the database you wish to modify and select the Table-Level privileges option. You can change the current privileges for each user on each table, as well as declare new privileges for each user on different tables. This screen is menu-based and easy to work with.

Figure 38 shows an example of how to manage privileges at the table level.

Figure 38. Manage privileges - table level
Server Administration User Privileges

In the same Manage Privileges screen, select Roles and fill in the specified information needed to create a new role. Figure 39 shows this step.

Figure 39. Server administration - creating roles
Server Administration User Privileges
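The privilege operations on these screens correspond to standard Informix SQL statements; a sketch with hypothetical user, table, and role names:

```sql
-- Database-level privilege
GRANT CONNECT TO mary;

-- Table-level privileges
GRANT SELECT, UPDATE ON customer TO mary;

-- Create a role, grant it privileges, and assign it to a user
CREATE ROLE reports;
GRANT SELECT ON orders TO reports;
GRANT reports TO mary;
```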
  • Virtual processors

The virtual processors option allows you to see the current status and usage of the different virtual processors running on the database server. You can view the data graphically or in tabular form. Figure 40 shows a sample graph of the virtual processors.

Figure 40. Server administration processors
Server Administration Processors

You can also add a Virtual Processor when needed through this screen by selecting Add at the bottom of the screen.

Figure 41. Server administration virtual processors
Server Administration Virtual Processors
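Adding a virtual processor from OAT is equivalent to the classic onmode command; for example:

```shell
# Dynamically add one CPU virtual processor to the running instance
onmode -p +1 cpu
```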
  • Auto Update Statistics

The Auto Update Statistics feature is the subject of a later article in this series.

Performance analysis

The Performance Analysis menu option has four parts -- SQL Explorer, Performance History, System Reports and Session Explorer. These features let users monitor various aspects of IDS.

Figure 42. Performance analysis categories
Performance Analysis Category
  • SQL Explorer

    One of the ways to analyze the overall system is to explore the SQL queries being executed. This can be facilitated using the query drill-down feature, which allows you to gather statistical information about each SQL statement executed on the system and analyze statement history. Query drill-down helps determine how long an SQL statement takes to execute, the resources each statement uses, how long the statement waits on resources, and so on. This feature is turned off by default, but can be turned on for all users or for a specific set of users.

    The SQLTRACE configuration parameter controls the default tracing behavior when the database server starts. When this feature is enabled with its default configuration, the database server tracks the last 1000 SQL statements that ran, along with the profile statistics for those statements. Any user who can modify the $INFORMIXDIR/etc/$ONCONFIG file can modify the value of the SQLTRACE configuration parameter and affect the startup configuration. However, only user informix or a DBA who has been granted connect privileges to the sysadmin database can use the Administration API commands to modify the run-time status of the SQL trace feature. The information set to control tracing behavior includes:

    • Level -- this can be low, medium or high. Low-level tracing captures statement statistics, statement text and statement iterators. Medium-level tracing captures low-level tracing information, plus table names, the database name and stored procedure stacks. High-level tracing captures all the information included in the medium-level tracing, plus host variables.
    • The number of SQL statements to trace (ntraces)
    • The number of kilobytes for the size of the trace buffer (size)
    • The mode field, which specifies the scope of tracing, that is, global for all users on the system or user for users who have tracing enabled by the Administration API task() function

The memory required by this feature is large if a lot of historical information is to be stored. The default amount of space required for SQL history tracing is two megabytes.
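For reference, a sketch of enabling SQL tracing outside OAT (the values are illustrative): the ONCONFIG entry sets the behavior at startup, and the Administration API changes it at run time.

```sql
-- ONCONFIG entry (startup behavior), shown here as a comment:
--   SQLTRACE level=low,ntraces=1000,size=2,mode=global

-- Run-time equivalent through the SQL Administration API,
-- executed from the sysadmin database by an authorized user:
EXECUTE FUNCTION task('set sql tracing on', 1000, 2, 'low', 'global');
```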

The SQL tracing admin tab allows you to change the tracing parameters using the OAT as shown in Figure 43. Select Modify to change any of the parameters.

Figure 43. SQL Explorer, SQL tracing admin tab screen
SQL Explorer, SQL Tracing Admin Tab Screen

The Statement Type tab in Figure 44 allows you to perform the query drill down by statement type. The statements that have been executed on the system are grouped by type, along with summary statistics for each SQL statement group. The statement types include SELECT, INSERT, DELETE, UPDATE, CREATE, DROP, and so forth. A graphical representation can also be seen on the right-hand side of the screen. Select the Data button to see the statements and their counts in tabular form.

Figure 44. Statement Type tab
SQL Explorer tab

You can select the type of SQL statement (for example, SELECT or DELETE) to see statement-level details, where all identical statements are grouped together, as Figure 45 illustrates. Choose SELECT to go to the SQL Type tab screen.

Figure 45. SQL Explorer Screen
SQL Explorer tab Screen

From the SQL Type tab screen, you can pick an SQL statement of interest and click Drill Down to come to the SQL List screen, shown in Figure 46. This screen displays the detailed statistics about each invocation of this SQL statement. Statistics provided include the ID of the user who ran the statement, the database session ID, the response time of the query, and so on.

Figure 46. SQL List screen
SQL List Screen

From the SQL List screen, you can choose a session of interest and click Drill Down to get a detailed profile of this particular SQL statement. The SQL Profile tab screen (Figure 47) provides statistics related to the scans performed, the number of buffer reads and writes, the number of pages read and written, the number of lock requests and waits, the estimated number of rows, the optimizer-estimated cost, the number of rows returned, and so on.

Figure 47. SQL Profile Tab
SQL Profile Tab

Similar to drilling down by SQL statement, another way to get performance statistics is to drill down by transaction time. Transaction time is one of the tabs of the SQL Explorer. This tab displays a list of transaction times for the most recent 100 transactions. Information provided includes session IDs and the corresponding number of SQL statements, the average response time, maximum response time, average memory used, and the number of rows processed in each transaction. Figure 48 shows the Transaction Time tab.

Figure 48. Transaction Time tab
Transaction Time tab

Click the Drill Down button to explore individual transactions. A list of the SQL statements run for the transaction is displayed, along with a transaction statistics summary. This includes information such as response time, the number of SQL statements in that transaction, cost estimation, sorting statistics, locking-related details, and so on. You can use the Drill Down button to go to the next level of exploration, that is, the statement level. This shows the complete SQL profile for individual statements, as explained under the SQL Profile tab screen.

Figure 49. Transaction screen
Transaction Tab Screen

Another option to explore SQL statements is by their frequency. The Frequency tab shows the list of SQL statements with their frequency of execution along with the average response time, lock wait, and wait IO time. Use the Drill Down buttons on this screen to drill down further and explore the individual statements.

Figure 50. Frequency tab screen
Frequency Tab Screen

Performance history

The Performance History menu option under Performance Analysis gives you a graphical view of the various database-related operations that contribute to good or bad performance. It also provides data in tabular form so you can draw conclusions about the performance of the server and take corrective actions if needed. Areas covered include auto checkpoints, disk reads, disk writes, disk flushes, chunk writes, foreground writes, buffer reads, buffer writes, buffer waits, the general profile (including deletes, commits, locks, deadlocks, sorts, scans, and the like), and the btree scanner.

The time range covered by these graphs is whatever is available in the sysadmin:mon_profile table. This table is populated by the dbcron task 'mon_profile', so its coverage depends on how often that task is scheduled to run; by default, it is once every four hours. The data is also auto-deleted based on the settings of the mon_profile task. The default retention is seven days, so any data older than seven days is purged from the table and is not available in these graphs. The 2k, 4k, and so on indicate the page sizes available for dbspaces (and therefore chunks). For example, if a dbspace is created with an 8k page size, then the chunks within that dbspace have 8k pages, which ultimately means you will see 8k page reads/writes and 8k buffer reads/writes.
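You can adjust how often the mon_profile task runs by editing its row in the sysadmin scheduler table; a sketch (the interval literal is illustrative):

```sql
-- Run the mon_profile task every hour instead of the default.
-- Executed against the sysadmin database by an authorized user.
UPDATE ph_task
   SET tk_frequency = '0 01:00:00'
 WHERE tk_name = 'mon_profile';
```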

System reports

The System Reports menu option under Performance Analysis helps you create various reports related to different areas of the server, for example, disk usage, memory pools, table actions, performance, sessions, and so on. These reports consolidate information so you can take necessary actions (for example, adding new chunks in case of a disk space shortage; monitoring table locking, memory usage, and virtual processors to check the scope for performance improvement; monitoring logical logs to ensure log availability; and the like).

Reports are categorized as Disk, Performance, Network, SQL Tables and Users, as shown in Figure 51. Every category includes a few reports; for example, if the Disk category is chosen from the list box, then the Disk Space Usage, Online Log, Logical Logs, View Checkpoints, Disk IO Levels, Server Admin Commands, Physical Log and System Backups reports are chosen automatically. Use the Create Report button to generate a consolidated report for all of these.

Figure 51. System Report screen
System Report Screen

Figure 52 shows a report generated to get more information about logical logs. The information in this report, such as the total number of logs, their status (backed up, used, available), and fill rate, helps you decide whether to add new logs or perform general log maintenance.

Figure 52. Logical Logs Report screen
Logical Logs Report Screen

Session Explorer

As the name suggests, the Session Explorer provides detailed information of all the current user sessions running at the server. Figure 53 shows a typical Session Explorer screen.

Figure 53. Session Explorer screen
Session Explorer Screen

Click on the session id under the SID heading to see the details of the individual session.
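The same session list is available from the command line with onstat, if you want to cross-check what OAT shows:

```shell
# List all active sessions
onstat -g ses

# Show details for one session (session ID 23 is hypothetical)
onstat -g ses 23
```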

The SQL tab of the Session Explorer shows SQL statements with their IDs and Type (INSERT, DELETE, SET ISOLATION, SET LOCK MODE, and so on). You can see these SQL statements only if SQL tracing is enabled using the configuration parameter SQLTRACE.

Figure 54. Session Explorer SQL tab
Session Explorer SQL Tab

The Locks tab provides information about all the locks associated with a session. These locks could be table-level, row-level, locks on an index, and so on. For every lock, you can see details like the database name: table name/index name on which the lock is applied; the type of lock (shared, exclusive, update, byte, intent shared, intent exclusive, intent shared exclusive, etc.); the duration for which the lock is held; the ID of the locked row; and any waiters for the lock. Index # 1 indicates that the lock is on an index. Key Item Locked is the locked index entry.

Figure 55. Session Explorer Locks tab
Session Explorer Locks Tab

The Threads tab of the Session Explorer provides information about the threads belonging to the selected session. There is one primary thread associated with a user session, and there can be many more depending on how the client's requests are being processed. From the Threads tab, you can see the current threads, their IDs, their priorities, the number of times each thread has run on a virtual processor (NUM_SCHEDULED), the amount of time each thread has spent running on a virtual processor (TOTAL_TIME), the time slice each thread got for its run (TIME_SLICE), the ID of the virtual processor the thread last ran on (VPID), the wait reason (WAIT_REASON) for every thread, and IO wait.

Figure 56. Session Explorer Threads tab
Session Explorer Threads Tab

The Memory tab helps you get a consolidated view of memory usage and availability for a session. The Name column lists the name of the memory pool. A row whose Name is just a number is a session memory pool, because memory pools for a session are tracked by session ID. The USED and FREE columns list the total amount of memory on the used list and free list, respectively. These two columns are formatted using the format_units function.

Figure 57. Session Explorer Memory tab
Session Explorer Memory Tab

The Network tab provides a detailed picture of the network usage of a session. The client protocol name is displayed as the THREAD_NAME. Along with this, the start time of the session (SESSION_START), the duration of the session (CONNECT_DURATION), the time of the last read from the network (LAST_READ), and the time of the last write to the network (LAST_WRITE) can be seen. Network usage in terms of data sent and received can be obtained from the columns AVERAGE_RECV, AVERAGE_SEND, RECEIVED_DATA, and SEND_DATA. RECEIVED_DATA is the number of bytes transferred to the server and SEND_DATA is the number of bytes transferred to the client. The numbers of network reads and network writes are used to calculate the average values provided.

Figure 58. Session Explorer Network tab
Session Explorer Network Tab

A complete picture of the server and session environment can be obtained from the Environment tab. This tab lists the startup environment settings of the server and also the environment variables at the session (that is, the client) level.

Figure 59. Session Explorer Environment tab
Session Explorer Environment Tab

The Profile tab of the Session Explorer provides the profile counts for the session being examined. The lock-related counts include Locks, the number of locks currently held; Lock Requests, the number of requested locks; Lock Waits, the number of lock waits; Dead Locks, the number of deadlocks detected; and Lock Time Outs, the number of timed-out lock requests. Log-related statistics include Log Records, the number of log records created by the session; Log Space, the log space currently used; and Max Log Space, the maximum log space ever used.

Figure 60. Session Explorer Profile tab
Session Explorer Profile Tab

Record-level statistics include rows processed in terms of ISAM reads (Rows Processed), rows inserted in terms of ISAM writes (Rows Inserted), rows updated (Rows Updated), and rows deleted (Rows Deleted). Commits, Rollbacks, and LongTXs provide transaction-related counts: the numbers of commits, rollbacks, and long transactions. Sequential Scans is the number of sequential scans performed. Sorting statistics come from Sorts, the total number of sort operations performed; Disk Sorts, the number of sorts performed on disk; and Memory Sorts, the number of sort operations performed in memory, calculated as the difference between the total sorts and the disk sorts. Largest Sort provides the maximum sort space used on disk. Buffer statistics can be obtained from the two fields Buffer Reads and Buffer Writes, the numbers of reads from and writes to the buffers, respectively.

Foreground (FG) reads and FG writes are the general reads and writes done during the session. Thus, the Profile tab of a session provides useful information about the session from various aspects.

SQL Toolbox

The SQL Toolbox category is mainly classified into three sub-categories, namely:

  • Databases
  • Schema browser
  • SQL editor

Databases

The Databases link opens the Databases tab, displaying a pie chart that shows the percentage of space occupied by each database, as well as the list of the databases. When you hover the mouse pointer over a database's portion of the pie chart, you can see the percentage of space that database covers. Clicking the Data tab in the top right corner of the graph provides the statistics for each individual slice of the pie chart.

Figure 61. Information on various databases
Info on various Databases

Note the Databases table below the graph, which lists all the databases on the given IDS server under the Names column. The Collation column has the DB_LOCALE value for each database. The Create Date column specifies the date when the database was created, in YYYY-MM-DD format. The Logging column shows the logging mode that was set when the database was created (buffered, unbuffered, ANSI-compliant, or not logged).

Schema browser

To study the contents of each database in depth, click on the respective database as listed in the Databases table. This action activates the sub-category 'Schema Browser' and displays the contents of the Tables tab by default. The schema browser has Tables as its first tab, which is followed by the tab SPL/UDR.

The Tables tab lists all the tables contained within the chosen database. At any given point in time, the user can choose a database or the IDS server, using the option provided at the top right corner of the screen. The drop-down menu for Server lists the servers defined for the particular group chosen, while the drop-down menu for Database lists the databases within that particular IDS server.

Do you need the database catalog tables listed along with the other tables? The choice is yours. To list the database catalog tables with the other tables, check the Include Database Catalog Tables box and then select Submit; the database catalog tables then appear in the listing. Otherwise, the listing excludes the database catalog tables.

A snapshot of the table on the Tables tab is shown in Figure 62:

Figure 62. Tables Info on Schema Browser
Table on the tables tab of schema browser tab


Table 12. Columns on the Tables tab
Parameter Description
Browse: An icon that opens a new 'Table Browse' tab
Name: Name of the table, view, or synonym
Create date: Date the table was created
Table id: System-assigned sequential identifying number
Partnum: Physical location code
Rowsize: Row size
NRows: Number of rows in the table
NIndexes: Number of indexes on the table
Locklevel: Lock mode for the table: B = Page, P = Page, R = Row
Fextsize: Size of the initial extent (in kilobytes)
Nextsize: Size of all subsequent extents (in kilobytes)
Pages Used: Number of pages used
View

From the columns listed in Table 12, Browse, Name and Partnum provide further information beyond what is shown in the table. Clicking the icon in the Browse column for a given table opens a 'Table Browse' tab beside the SPL/UDR tab. This tab lists the output of the query 'SELECT * FROM tabname;', where tabname is the name of the table associated with the icon.

Choosing a table listed under the Name column opens the Column Info tab, which describes the properties of its columns. The description includes the name of each column, the column type, the column length, and its extended type. Figure 63 shows the table displayed on the Column Info tab.

Figure 63. Column Info for a given table
Column Info for a given table

Selecting certain values in the Partnum column causes a window to pop up that displays the partition information for the respective value. Figure 64 shows a snapshot of this pop-up screen.

Figure 64. Partition information
Partition Info

SQL Editor

You are provided with an SQL Editor window, where you can type or paste an SQL statement and execute it by clicking Run Query. You can save the contents of the SQL Editor by clicking Save query to file. When you choose to save the query to a file, OAT automatically generates a filename, appending a number that increases by one each time you save a query to a file.

Figure 65. SQL Editor
SQL Editor

In Figure 65, notice that there are two SQL statements typed in the SQL Editor window, separated by a semicolon. When you execute these queries using the Run Query option, the resultant output is displayed in a separate Results tab and should be similar to the image in Figure 66.

Figure 66. SQL results
SQL Editor Results tab

Also, along with the result being displayed on the Results tab, you should see a notification similar to the one in Figure 67.

Figure 67. SQL Editor import limitation
SQL Editor Import limitation

This message indicates that only the query that appears before the first semicolon will be executed and all the subsequent lines after the semicolon will be removed.

Limiting the resultant output: The output of the query being executed can be limited using the following options provided in the SQL Editor.

Table 13. Limiting the resultant output of the query
Parameter Description
Text/Clob column option: Choose how output containing Text/CLOB columns is handled; select one of the following to limit query output: Show All, Show 255 Chars, Show in File, Show Size Only, Ignore Column
Byte/Blob column option: Choose how output containing Byte/BLOB columns is handled; select one of the following to limit query output: Ignore Column, Show in File, Show Size Only, Show As Image
Number of rows: For queries with ORDER BY, GROUP BY, subqueries, DISTINCT, unions, and aggregates, the number of rows to fetch can be limited by entering a value in the text box. You can also reset the value using the Reset option.

Import Query: You are also provided with an option to import queries that already exist in a file. To do this, click Browse, locate the file, and then click Import. The query appears in the SQL Query Editor window. Be aware that, as with Run Query, during import only the query that appears before the first semicolon is loaded into the SQL Query Editor; all subsequent lines after the semicolon are removed. You can then proceed with executing the imported query, whose results are displayed in the Results tab.

Summary

The OpenAdmin Tool is an effective, useful front end tool for working with IDS databases. In this article, we introduced many of the OAT components and described how to work with them. After reading the article, you should be able to establish connections for various IDS instances with the tool, manage connection groups, handle basic operations, perform various administrative functions, generate relevant reports, display adequate space information as and when required, and execute queries. In subsequent articles in this series, you'll learn how to migrate from the Informix Server Administrator to OAT, and how to analyze your server performance using OAT.
