System Administration Certification exam 919 for Informix 11.70 prep, Part 3: System activity monitoring

In this tutorial, you'll learn about IBM Informix® database tools and utilities for monitoring the database and diagnosing problems. Learn how to use the system-monitoring interface (SMI) and the SQL administration API. This tutorial prepares you for Part 3 of the System Administration Certification exam 919 for Informix v11.70.

Mu Yang (muyang@cn.ibm.com), Advisory Software Engineer, IBM

Yangmu has worked in the Informix area for more than 10 years. He is experienced in database design, implementation, maintenance, and performance tuning, and holds PMP and ITIL certifications. He has successfully helped many customers with upgrade, maintenance, and performance optimization projects.



Hedwig Fuchs (hedwig.fuchs@de.ibm.com), Advanced Support Engineer, IBM

Hedwig Fuchs has worked within Informix Technical Support for the last 14 years. Before joining Informix she worked as a consultant and programmer. Within IBM she has specialized in working with the database engines. Her work responsibilities include analyzing problems, including reproductions and code analysis.



28 December 2011

Before you start

About this series

Thinking about seeking certification on System Administration for Informix version 11.70 (Exam 919)? If so, you've landed in the right spot to get started. This series of Informix certification preparation tutorials covers all the topics you'll need to understand before you read that first exam question. Even if you're not seeking certification right away, this set of tutorials is a great place to start learning what's new in Informix 11.70.

About this tutorial

In this tutorial, you'll learn how to use the Informix monitoring tools to monitor the database server and diagnose problems. The material provided here primarily covers the objectives in Section 2 of the exam, entitled "System Activity Monitoring." Topics covered in this tutorial include:

  • Monitor and isolate problems using system databases
  • Message log
  • View information about the database server with Informix Utilities
  • View information about the database server using the Scheduler and the SQL Administration API
  • Concurrency and locking
  • Operating system tools

Objectives

After completing this tutorial, you should know:

  • How to use the sysmaster database
  • What to find in the message log
  • How to use the onstat and oncheck utilities
  • How to use the dbschema utility
  • How to use the scheduler
  • How to use the SQL Administration API
  • The different types of locks
  • How to use operating-system tools to monitor the database server

Prerequisites

To understand the material presented in this tutorial you should be familiar with:

  • The Informix environment, including the configuration file, parameters, installation, and administration
  • Informix concepts and terminology, including dbspaces, chunks, physical log, logical logs, and checkpoint

System requirements

You do not need a copy of Informix to complete this tutorial. However, you will get more out of the tutorial if you download the free trial version of Informix Innovator-C Edition to work along with this tutorial.


Monitoring and isolating problems using system databases

The database server creates and maintains the sysmaster database. It is analogous to the system catalog for databases, which is described in the IBM Informix Guide to SQL: Reference. Just as a system catalog for every database managed by the database server keeps track of objects and privileges in the database, a sysmaster database for every database server keeps track of information about the database server.

The sysmaster database contains the system-monitoring interface (SMI) tables. The SMI tables provide information about the state of the database server. You can query these tables to identify processing bottlenecks, determine resource usage, track session or database server activity, and so on. This section describes these SMI tables, which are slightly different from ordinary tables.

There are many tables in the sysmaster database. Check the manual for details. Table 1 shows a list of the new tables that are introduced in Informix version 11.

Table 1. New sysmaster tables
Table name | Description
sysbaract_log | The OnBar activity log file
syscheckpoint | Checkpoint information
syscluster | High-availability cluster information
syscmsmsla | Connection Manager information
syscmsmtab | Connection Manager information
syscompdicts_full | Compression dictionary information
sysdual | A single-row table
sysenv | Online server's startup environment
sysenvses | Session-level environment variable
sysha_lagtime | Secondary-server lagtime statistics
sysha_type | Information about connected servers
sysha_workload | Secondary-server workload statistics
sysipl | Index-page logging information
sysmgminfo | Memory Grant Manager/Parallel Data Query information
sysnetclienttype | Client-type network activity
sysnetglobal | Global network information
sysnetworkio | Network input/output
sysonlinelog | Online log information
sysprofile | System-profile information
sysrsslog | RS secondary-server information
sysrstcb | Improvement to view input/output and lock wait information
sysscblst | Improvement to view the memory used by session
syssqlcacheprof | Profile information about each SQL cache
syssqltrace | SQL statement information
syssqltrace_info | SQL profile trace system information
syssqltrace_iter | SQL statement iterators
syssrcrss | RS secondary-server statistics
syssrcsds | SD secondary-server statistics
systcblst | Existing table modifications to add wait stats
systrgrss | RS secondary-server statistics
systrgsds | SD secondary-server statistics

Understanding the SMI tables

Important usage note

The database server relies on information in the sysmaster database. Do not change any of the tables in sysmaster or any of the data within the tables. Such changes could cause unpredictable and debilitating results. The database server creates the sysmaster database when it initializes disk space. The database server creates the database with unbuffered logging. You cannot drop the database or any of the tables in it, and you cannot turn logging off.

As user informix on UNIX or a member of the Informix-Admin group on Windows, you can create SPL routines in the sysmaster database. (You can also create triggers on tables within sysmaster, but the database server never executes those triggers.)

Note that joins of multiple tables in sysmaster might return inconsistent results, because the database server does not lock the tables during a join. You can join sysmaster tables with tables in other databases. However, to join sysmaster tables with tables in a nonlogging database, first make the nonlogging database the current database.

If you need to rebuild the sysmaster database, contact technical support, who can guide you through running the $INFORMIXDIR/etc/buildsmi utility.

The SMI consists of tables and pseudo-tables that the database server maintains automatically. While the SMI tables appear to the user as tables, they are not recorded on disk as normal tables. Instead, the database server constructs the tables in memory, on demand, based on information in shared memory at that instant. When you query an SMI table, the database server reads information from the shared-memory structures. Because the database server continually updates the data in shared memory, the information that SMI provides lets you examine the current state of your database server.

The SMI tables provide information about the following topics:

  • Auditing
  • Checkpoints
  • Chunk input/output
  • Chunks
  • Database-logging status
  • Dbspaces
  • Disk usage
  • Environment variables
  • Extents
  • Locks
  • Networks
  • SQL statement cache statistics
  • SQL statements
  • System profiling
  • Tables
  • User profiling
  • Virtual-processor CPU usage

Accessing SMI tables

You can use SELECT statements on SMI tables wherever you can use SELECT against ordinary tables (for example, from dbaccess, in an SPL routine, with Informix ESQL/C, and so on) with one restriction: you cannot (meaningfully) reference row ID when you query SMI tables. SELECT statements that use row ID do not return an error, but the results are unpredictable. All standard SQL syntax, including joins between tables, sorting of output, and so on, works with SMI tables. For example, if you want to join an SMI table with a non-SMI table, name the SMI table with the following standard syntax: sysmaster[@dbservername]:[owner.]tablename
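
For example, here is a minimal sketch of that syntax in a join. The app_users table and its login_name and department columns are hypothetical objects in the current database; username and hostname are syssessions columns shown later in Listing 1:

SELECT s.username, s.hostname, u.department
FROM sysmaster:syssessions s, app_users u
WHERE s.username = u.login_name;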

Any user can use SQL SELECT statements to query an SMI table, but standard users cannot run statements other than the SELECT statement. Users who attempt to run other statements receive permission errors. The administrator can run SQL statements other than SELECT, but the results of such statements are unpredictable.

Tip: For more predictable results, query the views that are associated with each table instead of querying the tables directly.

Dynamic Server includes the sysadtinfo and sysaudit tables. Only the user informix on UNIX or members of the Informix-Admin group on Windows can query the sysadtinfo and sysaudit tables.

You cannot use the dbschema or dbexport utilities on any of the tables in the sysmaster database. If you do, the database server generates the following error message: Database has pseudo tables - can't build schema

Triggers and event alarms

You can create a trigger on an SMI table, but it never gets executed. Triggers are activated only when an INSERT, UPDATE, or DELETE statement occurs on a table. The updates to the SMI data occur within the database server, without the use of SQL, so a trigger on an SMI table is never activated.

To create an event alarm, you can query for a particular condition at predefined intervals and execute an SPL routine if the necessary conditions for the alarm are met.
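
The following SPL sketch illustrates that idea. It is not a built-in routine; the 100-session threshold and the mail command are arbitrary placeholders:

CREATE PROCEDURE alert_many_sessions()
    DEFINE n INT;
    -- Check a condition in an SMI table
    SELECT COUNT(*) INTO n FROM sysmaster:syssessions;
    IF n > 100 THEN
        -- Take an administrative action; here, send a hypothetical notification mail
        SYSTEM 'echo "More than 100 sessions are connected" | mail -s "IDS alert" dba@example.com';
    END IF;
END PROCEDURE;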

SPL and SMI tables

You can access SMI tables from within an SPL routine. When you reference SMI tables, use the same syntax that you use to reference a standard table.

Locking and SMI tables

The information in the SMI tables changes based on the database server activity. However, the database server does not update the information using SQL statements. When you use SMI tables with an isolation level that locks objects, it prevents other users from accessing the object, but it does not prevent the data from changing. In this sense, all the SMI tables have a permanent Dirty Read isolation level.

Example of how to use the SMI tables

You might want a list of users who have a database open and a list of which workstations those users are using to connect to the database. onstat -u indicates which users are connected to the server, but not which databases and workstations they are using. onstat -g ses indicates the users and workstations, but not which databases. onstat -g sql indicates the session IDs and databases, but not the user names and workstations.

Listing 1 shows an example query.

Listing 1. Query SMI tables in sysmaster
SELECT sysdatabases.name database, syssessions.username,
       syssessions.hostname, syslocks.owner sid
FROM syslocks, sysdatabases, outer syssessions
WHERE syslocks.tabname = "sysdatabases"
AND syslocks.rowidlk = sysdatabases.rowid
AND syslocks.owner = syssessions.sid;

Listing 2 shows the resulting output.

Listing 2. Result of querying SMI tables in sysmaster
database stores_demo 
username informix 
hostname istation1 
sid 27

database  sysmaster  
username  informix  
hostname  istation1  
sid  29

Example of how to use the syslocks tables

Look at the syslocks table in the sysmaster database and see how it can help troubleshoot locking problems. The syslocks table tracks locks and is analogous to the output from onstat -k. Listing 3 shows a sample line of onstat -k output.

Listing 3. Output from onstat -k
Locks
address          wtlist  owner            lklist  type   tblsnum  rowid    key#/bsiz
700000011aee2a8  0       7000000704d3d30  0         S    100002   204    0       
7000000157f72a8  0       7000000704d4568  0     HDR+S    100002   204    0       
70000001d2092a8  0       7000000704d34f8  0         S    100002   204    0       
 3 active, 2000000 total, 262144 hash buckets, 0 lock table overflows

There is a fair amount of data here, but it involves some work to get meaningful information.

Example of how to query the syslocks tables in sysmaster

Listing 4 shows the same locks, obtained by querying the syslocks table in the sysmaster database. Note that the WHERE clause is omitted, so all locks are returned.

Listing 4. Querying the syslocks table in sysmaster
Select * from syslocks 

dbsname stores_demo 
tabname customer 
rowidlk 256
keynum 0 
type X 
owner 28 
waiter

Instead of a cryptic partnum (tblsnum), there is an actual database name and a table name. In real life this is more meaningful. Instead of the address for the owner given in the onstat -k output, the SELECT statement gives a session ID. So, you can run onstat -g ses, onstat -g sql, and onstat -g stm to find out what the user is doing that caused the locking.

Note that the results of querying the sysmaster database are usually given in decimal format, whereas onstat output may be given in hexadecimal format. This can cause some confusion if you are trying to track down a problem using both tools.

Example of common pseudo-tables

There are many pseudo-tables, usually named intuitively after the resource they track. Check Table 2 for some common pseudo-tables and their corresponding onstat commands; an example query follows the table.

Table 2. Common pseudo-tables
Table name | Corresponding onstat command
sysconfig | onstat -c
syschunks | onstat -d (chunks part)
sysdbspaces | onstat -d (dbspaces part)
syslogs | onstat -l (logical logs part)
sysprofile | onstat -p, onstat -F
sysptntab | onstat -t
sysusers | onstat -u
syslocks | onstat -k
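
For example, instead of scanning onstat -c output, you can query sysconfig for the effective value of a single configuration parameter. This is a sketch; the cf_name and cf_effective column names are assumed from the standard sysconfig layout, so verify them on your server:

SELECT cf_name, cf_effective
FROM sysmaster:sysconfig
WHERE cf_name = "MSGPATH";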

The buildsmi script

When you bring the database server up for the first time, it runs a script called buildsmi, which is in the etc directory. This script builds the database and tables that support SMI. The database server requires approximately 1750 free pages of logical-log space to build the sysmaster database. If you receive an error message that directs you to run the buildsmi script, a problem probably occurred while the database server was building the SMI database, tables, and views.

When you use buildsmi, the existing sysmaster database is dropped and then re-created. This script must be run as user informix on UNIX, or as a member of the Informix-Admin group on Windows, after ensuring that no connections to the sysmaster database are made during the build of the database. For example, if a scheduler task is running when the buildsmi script begins, the script fails when the scheduler attempts to access any of the sysmaster tables. Errors issued while the buildsmi script runs are written on UNIX to the file /tmp/buildsmi.out, or on Windows to the file %INFORMIXDIR%\etc\buildsmi_out.%INFORMIXSERVER%, where %INFORMIXSERVER% is the name of the Informix database server.

The bldutil.sh script

The sysutils database is where OnBar stores information about every backup or restore it performs. Backup information about each dbspace and logical log is also stored here. OnBar uses these tables when performing warm restores.

When you initialize the database server for the first time, it runs a script called bldutil.sh on UNIX or bldutil.bat on Windows. This script builds the sysutils database. If it fails, the database server creates an output file in the tmp directory. The output file is bldutil.process_id on UNIX and bldutil.out on Windows. The messages in this output file reflect errors that occurred during the script execution.

The system-monitoring interface tables

The sysmaster database contains many tables that you can use to monitor your system. The database server supports the following SMI tables:

syscheckpoint
The syscheckpoint table provides information and statistics about checkpoints. The corresponding onstat command is onstat -g ckp.
syschunks
The syschunks table contains a description of each of the chunks that the database server manages. The corresponding onstat command is onstat -d.
syslocks
The syslocks table provides information about all the currently active locks in the database server. The corresponding onstat command is onstat -k.
syssqltrace
The syssqltrace table provides detailed information about SQL statements. The corresponding onstat command is onstat -g his.
syslogs
The syslogs table provides information about space use in logical-log files. The corresponding onstat command is onstat -l.
syssessions
The syssessions table provides general information about each user connected to the database server. The corresponding onstat command is onstat -g ses.

Using the message log

The Informix message log is an operating-system file. The database server writes status and error information to the message-log file.

Location

To specify the message log path name, set the MSGPATH configuration parameter. Changes to MSGPATH take effect after you shut down and restart the database server.

Monitoring the message log

You should monitor the message log frequently to ensure that the database server is running normally and that events are being logged as expected. Use the onstat -m command to obtain the name of the message log and the last 20 lines from the file. Use a text editor to read the entire message log.

Monitor the message log size, because the database server appends new entries to this file. Edit the log as needed, or back it up to tape and delete it.

If the database server experiences a failure, the message log serves as an audit trail for retracing the events that led up to the problem. Often the database server records in the message log the exact nature of the problem and the suggested corrective action.

Message categories

Four general categories of unnumbered messages exist. Some messages fall into more than one category.

  • Routine information
  • Assertion-failed messages
  • Administrative action needed
  • Fatal error detected

Example of routine information

Listing 5 shows an example of messages that fall into the routine information category.

Listing 5. Example of routine information messages
15:52:27  Maximum server connections 0
15:52:27  Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns blocked 0,
         Plog used 20, Llog used 12
15:52:27  Level 0 Archive started on rootdbs
15:52:28  Archive on rootdbs Completed.
15:53:07  Checkpoint Completed:  duration was 0 seconds.
15:53:07  Fri Apr  8 - loguniq 6, logpos 0xdfa018, timestamp: 0xa24db Interval: 21

Example of assertion failed

Listing 6 shows an example of messages that fall into the assertion-failed messages category.

Listing 6. Example of assertion-failed messages
18:39:07  Assert Failed: No Exception Handler
18:39:07   Who: Session(176, informix@testdb, 7263, c000000011012ef0)
		Thread(6457, xchg_3.0, c000000010feab00, 3)
		File: mtex.c Line: 491
18:39:07   Results: Exception Caught. Type: MT_EX_OS, Context: mem
18:39:07   Action: Please notify IBM Informix Technical Support.
18:39:07   See Also: /home/dump/af.1d21d84a, shmem.1d21d84a.0

Example of administrative action needed messages

Listing 7 shows an example of messages that fall into the administrative action needed messages category.

Listing 7. Example of administrative action needed messages
11:05:49  Maximum  server  connections  4
11:08:05  Logical  Log  Files  are  Full  --  Backup  is  Needed

Example of fatal error messages

Listing 8 shows an example of messages that fall into the fatal error messages category.

Listing 8. Example of fatal error messages
20:29:19  Assert Failed: Unexpected virtual processor termination, pid = 25504, exit = 0x9

20:29:19  IBM Informix Dynamic Server Version 11.70.FC1    
20:29:19   Who: Session(2, informix@, 0, 0)
                Thread(9, soctcppoll, 0, 1)
                File: mt.c Line: 14549
20:29:19  stack trace for pid 25503 written to /home/informix/1170/tmp/af.3f1739e
20:29:19   See Also: /home/informix/1170/tmp/af.3f1739e
20:29:24  mt.c, line 14549, thread 9, proc id 25503,
                  Unexpected virtual processor termination, 
                  pid = 25504, exit = 0x9
.
20:29:24  The Master Daemon Died
20:29:24  PANIC: Attempting to bring system down

Event alarms

The database server provides a mechanism for automatically triggering administrative actions based on an event that occurs in the database server environment. Events can be informative, such as backup complete, or they can indicate an error condition that requires your attention, such as unable to allocate memory. To use the event-alarm feature, set the ALARMPROGRAM configuration parameter to the full pathname of an executable file that performs the necessary administrative actions.

The database server can execute a program either whenever certain noteworthy event alarms occur or every time any event alarm occurs. Noteworthy event alarms include failure of a database; a table, index, chunk, or dbspace taken offline; internal subsystem failure; start-up failure; and detection of a long transaction. You can receive notification of an event alarm through email or pager email.

The following configuration parameters are specific to event alarms:

ALRM_ALL_EVENTS
Specifies whether ALARMPROGRAM runs for all events that are logged in the MSGPATH or for only specified noteworthy events
ALARMPROGRAM
Specifies the location of a file that is executed when an event alarm occurs

Customizing the ALARMPROGRAM scripts

Follow these steps to customize the alarmprogram.[sh|bat] script. You can use alarmprogram.[sh|bat] instead of log_full.[sh|bat] to automate log backups.

  1. Change the value of ADMINMAIL to the email address of the database server administrator.
  2. Change the value of PAGERMAIL to the pager service email address.
  3. Set the value of MAILUTILITY to /usr/bin/mail on UNIX or to $INFORMIXDIR/bin/ntmail.exe on Windows.
  4. To automatically back up logical logs as they fill, change BACKUP to yes. To stop automatic log backups, change BACKUP to any value other than yes.
  5. In the ONCONFIG file, set ALARMPROGRAM to the full pathname of alarmprogram.[sh|bat].
  6. Restart the database server.

Alarms with a severity of 1 or 2 do not write any messages to the message log or send email. Alarms with a severity of 3 or greater send email to the database administrator. Alarms with a severity of 4 or 5 also notify a pager through email.

Precautions for foreground operations in alarm scripts

To ensure continuous server availability, do not run certain foreground operations in an alarm script. When the server invokes an alarm script, the server sometimes waits for the script to complete before proceeding. For example:

  • When an alarm is invoked because of a fatal error, the server waits for the script to finish writing information to the error log. In certain situations, alarm events 5 and 6 are run in the foreground.
  • Some enterprise replication event alarms run in the foreground, such as event alarms 31, 34, 37, and 39.

Because the server might need to wait for the alarm program script to finish, do not run the following operations in the foreground in an alarm script:

  • An onmode command that forces user connections off the server, such as onmode -u or onmode -yuk. These kinds of onmode commands can cause a deadlock between the server and the alarm script because the server might wait for the alarm script to complete while the alarm script that executed the onmode command waits for the user sessions to shut down, and one of those sessions is running the alarm script itself.
  • Operations that might take a long time to complete or that have a highly variable run time. Operations that take a long time to complete can cause the server to appear as if it is not responding while the operation is running.

If you need to run the above operations in an alarm script, run them in the background using one of the following operating system utilities:

  • On UNIX, use the nohup utility, as in nohup onmode -yuk &. The nohup utility keeps the command running even if its parent terminates, and the ampersand (&) runs the command in the background so that it does not block execution of the alarm program script.
  • On Windows, use the start utility with the /B flag, such as start /B onmode -yuk.

Interpreting event alarm messages

Some of the events that the database server reports to the message log also cause it to invoke the alarm program. The class messages indicate the events that the database server reports. The database server reports a nonzero exit code in the message log. In the alarm program, set the EXIT_STATUS variable to 0 for successful completion and set the variable to another number for a failure. For example, if a thread attempts to acquire a lock, but the maximum number of locks has already been reached, the database server writes a message to the message log, as shown in Listing 9.

Listing 9. Example error message in the message log
10:37:22 Checkpoint Completed: duration was 0 seconds.
10:51:08 Lock table overflow - user id 30032, rstcb 10132264
10:51:10 Lock table overflow - user id 30032, rstcb 10132264
10:51:12 Checkpoint Completed: duration was 1 seconds.

Events in the ph_alert table

All event alarms that are generated are inserted in the ph_alert table in the sysadmin database. You can query the ph_alert table on the local or remote server to view the recent event alarms for that server. You can write SQL scripts based on the ph_alert table to handle event alarms instead of using the scripts controlled by the ALARMPROGRAM configuration parameter. By default, alerts remain in the ph_alert table for 15 days before being purged.

Listing 10 shows an event alarm in the ph_alert table:

Listing 10. Querying the ph_alert table in sysadmin
SELECT * FROM ph_alert WHERE alert_object_type = "ALARM";

Listing 11 shows the resulting output.

Listing 11. Output of querying the ph_alert table in sysadmin
id 34
alert_task_id 18
alert_task_seq 10
alert_type INFO
alert_color YELLOW
alert_time 2010-03-08 12:05:48
alert_state NEW
alert_state_chang+ 2010-03-08 12:05:48
alert_object_type ALARM
alert_object_name 23
alert_message Logical Log 12 Complete, timestamp: 0x8e6a1.
alert_action_dbs sysadmin
alert_action
alert_object_info 23001

Using Informix utilities to view information about the database server

This section describes the utilities that collect information about database server activity.

The onstat utility

The onstat utility provides a way to monitor database server shared memory from the command line. The onstat utility reads data from shared memory and reports statistics that are accurate for the instant during which the command executes. It provides information about sessions, users, locks, table profiles, buffers, and so on.

The system-monitoring interface (SMI) tables also provide information about the shared memory of the database server.

Monitor the database server status

The onstat output heading indicates the database server status. All onstat output includes a header. Running onstat - (a hyphen with no option letter) displays only the output header and is useful for checking the database server mode. The header has the format shown in Listing 12.

Listing 12. Header format
version--mode (Type)--(Checkpnt)--Up uptime--shmem_size Kbytes

Listing 13 shows an example header.

Listing 13. Example header format
IBM Informix Dynamic Server Version 11.70.FC1 -- On-Line -- Up 01:44:23 -- 3040  Kbytes

Table 3 shows the most common onstat options.

Table 3. Most common onstat options
Option    Purpose
-- Print the help text
-b Print buffers
-B Print all buffers
-c Print configuration file
-C Print b-tree scanner requests
-d Print spaces and chunks
-D Print spaces and detailed chunk stats
-F Print page flushers
-g <option> Print monitoring option
-G Print global transaction IDs
-k Print locks
-l Print logging
-m Print message log
-p Print profile
-P Print partition buffer summary
-R Print LRU queues
-s Print latches
-t Print TBLspaces
-T Print tablespace information
-u Print user threads
-x Print transactions
-X Print entire list of sharers and waiters for buffers
-z Zero the profile counts
-a Interpreted as onstat -mcuxskbPFhRtdGflLpO; onstat -g all; onstat -XC

Reading statistics from a source file

The onstat utility can read from a source file instead of reading from the shared memory. This file must include a shared-memory segment that you created with the onstat -o command.

Save a copy of the shared-memory segments to dmpfile using $> onstat -o dmpfile. Print locks from the dmpfile using $> onstat -k dmpfile.

Interactive execution

To put the onstat utility in interactive mode, use the -i option. Interactive mode allows you to enter multiple options without exiting the program.

Continuous onstat command execution

Use the onstat -r option combined with other onstat options to cause the other options to execute repeatedly at a specified interval.

Onstat when the server is offline

If the server is offline, you can run the onstat commands listed in Table 4.

Table 4. Onstat commands that you can run when the server is offline
onstat command    Purpose
onstat -c Displays the ONCONFIG file
onstat -m Displays the 20 most recent lines of the database server message log
onstat -g dis Displays a list of database servers on a computer
onstat -g osi Displays information about your operating system resources and parameters

The oncheck utility

The oncheck utility displays information about the database disk configuration and usage, such as the number of pages used for a table, the contents of the reserved pages, and the number of extents in a table. The oncheck utility can detect inconsistencies in disk structures and can repair some inconsistencies in disk structures.

Table 5 shows the oncheck options.

Table 5. List of oncheck options
Option    Purpose
-cr Checks reserved pages
-cR Checks reserved pages, including logical and physical logs
-ce Checks extents
-cc Checks database catalogs
-ci Checks table indexes
-cI Checks table indexes and row IDs
-cd Checks TBLspace data rows, including bitmaps
-cD Checks TBLspace data rows, including bitmaps, remainder pages, and BLOBs
-cs Checks SBLOBspace metadata partitions
-cS Checks SBLOBspace metadata partitions and LO extents
-pr Prints reserved pages
-pR Prints reserved pages, including logical and physical logs
-pe Prints extents report
-pc Prints catalog report
-pk Prints keys in index
-pK Prints keys and row IDs in index
-pl Prints leaf node keys only
-pL Prints leaf node keys and row IDs
-pd Prints TBLspace data rows
-pD Prints TBLspace data rows, including bitmaps, remainder pages, and BLOBs
-pt Prints TBLspace report
-pT Prints TBLspace disk utilization report
-pp Prints dump page for the given table
-pP Prints dump page for the given chunk number and page number
-pB Prints BLOBspace utilization
-ps Prints SBLOBspace metadata partitions
-pS Prints SBLOBspace metadata partitions and LO extents

Oncheck check and repair

The oncheck utility can repair the following types of disk structures:

  • Partition page statistics
  • Bitmap pages
  • Partition blobpages
  • Blobspace blobpages
  • Indexes
  • Sbspace pages
  • Metadata partitions for sbspaces

Use the -y option to instruct oncheck to perform repairs automatically. If you do not use the -y option, oncheck prompts you when it encounters an inconsistency and allows you to request a repair. If you specify option -n, oncheck does not perform repairs.

Oncheck and locking

The oncheck utility places a shared lock on a table during the following operations so that no other users can perform updates, inserts, or deletes until the check has completed:

  • When it checks data
  • When it checks or prints indexes, and when the table uses page locking
  • When it checks or prints indexes, and when the table uses row locking and you specify the -x option

If the table does not use page locking, the database server does not place a shared lock on the table when you check an index with the oncheck -ci, -cI, -pk, -pK, -pl, or -pL options. When no shared lock is on the table during an index check, other users can update rows during the check.

The oncheck utility cannot be as accurate in the index check when the table uses row-level locking and no shared lock is placed during the index check. For absolute assurance of a complete index check, execute oncheck with the -x option. With the -x option, oncheck places a shared lock on the table, and no other users can perform updates, inserts, or deletes until the check has completed.

Oncheck when the server is offline

Table 6 shows the oncheck commands you can run when the server is offline.

Table 6. List of oncheck commands that you can run when the server is offline
oncheck command    Purpose
oncheck -pr Displays the reserved-page information as it checks the reserved pages
oncheck -cr Checks each of the root dbspace reserved pages for several conditions
oncheck -pP Displays contents of a page; requires a chunk number and page number
oncheck -pp Displays contents of a logical page; requires a TBLspace number (partition number) and a page number

For example, the command $ oncheck -pp 0x200560 3 yields the results in Listing 14.

Listing 14. Print page three of TBLspace 0x200560
$ oncheck -pp 0x200560 3 

shared memory not initialized for INFORMIXSERVER 'ids1170uc1shm'
addr             stamp    chksum nslots flag type  frptr frcnt next   prev
2:253535         13554146 f72    12     801  DATA  1563  433   0      0

        slot ptr   len   flg
        1    24    127   0
        2    151   128   0
        3    279   128   0
        4    407   128   0
        5    535   128   0
        6    663   126   0
        7    789   126   0
        8    915   127   0
        9    1042  129   0
        10   1171  129   0
        11   1300  132   0
        12   1432  131   0
slot   1:
    0:  9 73 79 73 74 61 62 6c 65 73 69 6e 66 6f 72 6d   .systablesinform
   16: 69 78 20 20 20 20 20 20 20 20 20 20 20 20 20 20   ix
   32: 20 20 20 20 20 20 20 20 20 20  0 20  5 60  0  0             . .`..
   48:  0  1  1 f4  0 1a  0  2  0  0  0  0  0 40 50 40   ...t.........@P@
.......

Oncheck return codes

The oncheck utility returns a code on exit, as shown in Table 7.

Table 7. List of oncheck return codes on exit
Return code | Meaning of the return code
-1 | GLS failures
1 | Error connecting to database server
2 | Onconfig access error
2 | Invalid onconfig settings
2 | Invalid arguments to oncheck
2 | Error detected by oncheck
0 | No errors detected by oncheck
1 | Not properly installed (Windows-only error)
2 | Authentication error (Windows-only error)

The dbschema utility

The dbschema utility displays information about the database objects in a database (the schema) in SQL language. It also generates commands for reproducing storage spaces, chunks, logical logs, and physical logs. You can use the dbschema utility for the following purposes:

  • To display the distributions that the UPDATE STATISTICS statement creates.
  • To display the schema for creating objects such as databases, tables, views, sequences, synonyms, procedures, roles, and privileges.
  • To display the commands for creating storage spaces, chunks, and logs.
  • To display information about user-defined data types and row types.

Following are some examples of dbschema options.

Listing 15. Display the schema of a database without specifying an owner
$> dbschema -d h1 -nw
Listing 16. Display the commands for creating storage spaces
$> dbschema -d h1 -c

--  Dbspace 1 -- Chunk 1
--  EXECUTE FUNCTION TASK
 ('create dbspace', 'rootdbs', '/dbspaces/rootdbs', '200000', '0', '2', '500', '400');

--  Dbspace 2 -- Chunk 2
EXECUTE FUNCTION TASK
 ('create dbspace', 'dbs1', '/dbspaces/c1', '200000', '0', '2', '100', '100');

--  Physical Log
EXECUTE FUNCTION TASK
 ('alter plog', 'dbs1', '50000');
Listing 17. Display the distribution that the UPDATE STATISTICS HIGH statement creates
$> dbschema -hd customer -d stores7

Distribution for informix.customer.fname
Constructed on 2011-10-14 08:29:25.00000
High Mode, 0.500000 Resolution
--- DISTRIBUTION ---
     (          Alfred          )
  1: (  1,   1, Alfred          )
  2: (  1,   1, Anthony         )
  3: (  1,   1, Arnold          )
  4: (  1,   1, Bob             )
  5: (  1,   1, Carole          )
Listing 18. Display server-specific information
$> dbschema -d h1 -ss

create table "informix".tab1
  (
    c1 integer,
    c2 char(20)
  )
  fragment by round robin in dbs1 , dbs2 , dbs3
  extent size 16 next size 16 lock mode page;

Using the scheduler and the SQL administration API to view information about the database server

You can use the scheduler and the SQL administration API to manage automatic maintenance and to monitor administrative tasks. These components of Informix Dynamic Server enable you to simplify the collection of information and maintenance of the server in complex systems.

The scheduler

The scheduler can run administrative and maintenance tasks or can collect information at predefined times. You can use the built-in tasks and sensors, or you can write your own tasks and sensors to customize your automatic maintenance. The scheduler is controlled by a set of tables and a set of stored procedures in the sysadmin database. To view or modify components of the scheduler, or to add new components, you must be connected to the sysadmin database as user informix or another authorized user.

The scheduler has four different job types:

Task
Runs an action at a specific time and frequency.
Sensor
Collects and saves information at a specific time and frequency. Creates a result table to hold the information.
Startup task
Defines a task that runs only when the database server moves from quiescent mode to online mode.
Startup sensor
Defines a sensor that runs only when the database server moves from quiescent mode to online mode.

Built-in tasks and sensors

The scheduler contains built-in tasks and sensors that run automatically, as shown in Table 8.

Table 8. Built-in tasks and sensors
Task or sensor | Description | Enabled by default?
alert cleanup | This task removes all alert entries from the ph_alert table that are older than the threshold of 15 days. | Yes
auto_crsd | This task compresses, shrinks, repacks, and defragments tables and fragments. | No
autoreg exe | This task registers database extensions when they are first used. | No
autoreg vp | This task creates a specialized virtual processor for a database extension as needed. | No
auto_tune_cpu_vps | This task automatically adds CPU virtual processors if the number of allocated virtual processors is less than half the number of CPU processors on the computer. | No
auto update statistics evaluation | This task analyzes all the tables in all logged databases, identifies the tables whose distributions need to be updated, and generates UPDATE STATISTICS statements for those tables. | Yes
auto update statistics refresh | This task runs the UPDATE STATISTICS statements generated by the auto update statistics evaluation task. | Yes
bad_index_alert | This task checks for corrupted indexes. | No
check_backup | This task checks to ensure that backups have run since the time specified by thresholds. | Yes
check_for_ipa | This task checks tables for outstanding in-place alter operations. | No
idle_user_timeout | This task terminates user sessions that have been idle for longer than 60 minutes. | No
mon_checkpoint | This sensor saves information about checkpoints. | Yes
mon_command_history | This task deletes rows from the command_history table that are older than the threshold of 30 days. | Yes
mon_config | This sensor saves the most recent value for each configuration parameter in the onconfig file. | Yes
mon_config_startup | This sensor saves the value for each configuration parameter in the onconfig file when the server starts. | Yes
mon_low_storage | This task scans the list of dbspaces to find spaces that fall below the threshold specified by the SP_THRESHOLD configuration parameter. Then, the task expands the spaces by extending chunks or adding chunks. | Yes
mon_memory_system | This sensor collects server memory consumption information. | Yes
mon_profile | This sensor saves server profile information. | Yes
mon_sysenv | This startup sensor saves the database server startup environment. | Yes
mon_table_names | This sensor saves table names along with their creation time. | Yes
mon_table_profile | This sensor saves table profile information. | Yes
mon_users | This sensor saves profile information about each user. | Yes
mon_vps | This sensor collects virtual processor information. | Yes
online_log_rotate | This task rotates the online message log file that is specified in the MSGPATH configuration parameter. | No
post_alarm_message | This task posts alerts. | Yes

You can change built-in task and sensor properties by updating the ph_task table. You can enable or disable tasks by updating the field tk_enable in the ph_task table under the sysadmin database. Listing 19 shows an example of updating the ph_task table by showing how to enable the task check_for_ipa to check for outstanding in-place alter operations.

Listing 19. Enable the task check_for_ipa to check for outstanding in-place alter operations
UPDATE ph_task set tk_enable = "t" where tk_name = "check_for_ipa"

Listing 20 shows how to configure the task mon_profile to run every 10 hours (instead of running every 4 hours) and to delete data older than 20 days (instead of deleting data older than 30 days) in the resulting table.

Listing 20. Configure the task mon_profile
UPDATE ph_task 
set (tk_frequency,  tk_delete) = 
          (INTERVAL (10) HOUR to HOUR,  INTERVAL (20) DAY TO DAY)
where tk_name = "mon_profile"

Create a new task or a new sensor

Each task requires the following:

  • A name for the task
  • A description of the task
  • An action that the task performs, which can be a user-defined function or an SQL statement

Each sensor requires the following:

  • A name for the sensor
  • A description of the sensor
  • An action that the sensor performs, which can be a user-defined function or an SQL statement
  • A name of a table that will hold the resulting data
  • A CREATE TABLE statement for the table that will hold the resulting data

To create a task or a sensor, use the INSERT statement to add a new row to the ph_task table. For example, Listing 21 shows how to create a task that tries to free shared memory each day at 10:00.

Listing 21. Create a task that tries to free shared memory each day at 10:00
insert into ph_task (tk_name, tk_description, tk_type, tk_execute, tk_start_time)
values
("Free Memory",
"free unused shared memory segments",
"TASK",
"EXECUTE FUNCTION task ('onmode', 'F');",
DATETIME (10) HOUR TO HOUR);

Create a threshold

A threshold value can be compared to a current value to determine whether a task or sensor should run. To create a threshold, use the INSERT statement to add a new row to the ph_threshold table, and write the task or sensor action so that it uses the threshold.
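
A hedged sketch follows. The column list (name, task_name, value, value_type, description) is assumed from the usual ph_threshold layout, and the threshold name, task name, and values are placeholders for your own objects:

INSERT INTO ph_threshold (name, task_name, value, value_type, description)
VALUES ("MY SESSION LIMIT", "my custom task", "100", "NUMERIC",
        "Maximum sessions before the custom task raises an alert");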

Create an alert

To create an alert, use the INSERT statement to add a new row to the ph_alert table from within the task or sensor action.
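
A hedged sketch follows. The column names match the ph_alert row shown in Listing 11; inserting 0 for id assumes that column is a SERIAL, alert_task_id and alert_task_seq must identify your own task, and the remaining values are placeholders:

INSERT INTO ph_alert
    (id, alert_task_id, alert_task_seq, alert_type, alert_color,
     alert_state, alert_object_type, alert_object_name, alert_message, alert_action)
VALUES
    (0, 18, 1, "INFO", "YELLOW", "NEW", "SERVER",
     "my custom task", "Example alert raised by a custom task", NULL);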

Create a group

You can use groups to organize scheduler tasks and sensors. To create a group, use the INSERT statement to add a new row to the ph_group table.

Modify the scheduler

You can modify the properties of scheduler tasks, sensors, alerts, groups, or thresholds. You can modify both built-in properties and properties that you added. To modify the properties, use the UPDATE statement for the appropriate scheduler table.

Monitor the scheduler

The scheduler has the following two types of threads:

  • dbWorker, which executes scheduled tasks and sensors
  • dbScheduler, which prepares the next task or sensor that is scheduled to run

You can view currently running tasks and sensors with the onstat -g dbc command. Listing 22 shows an example of the resulting output.

Listing 22. Output of onstat -g dbc
    	Worker Thread(0)    46fa6f80         
=====================================
Task:                4729fc18         
Task Name:           mon_sysenv
Task ID:             4
Task Type:           STARTUP SENSOR                                         
Task Execution:      insert into mon_sysenv select 1, env_name, env_value FROM 
                     sysmaster:sysenv


WORKER PROFILE
    Total Jobs Executed             3
    Sensors Executed                2
    Tasks Executed                  1
    Purge Requests                  2
    Rows Purged                     0


Scheduler Thread    46fa6f80         
=====================================
Run Queue
    Empty
Run Queue Size        0
Next Task             7
Next Task Waittime    57

You can view information about the tasks and sensors that have completed by querying the ph_run table, as shown in Listing 23.

Listing 23. Query the last execution time and the return code of the task Free Memory
select ph_task.tk_name, ph_run.run_retcode, ph_run.run_time
from ph_run, ph_task
where ph_task.tk_name = "Free Memory"
and ph_run.run_task_id = ph_task.tk_id;

The resulting output is shown in Listing 24.

Listing 24. The last execution time and the return code of the task Free Memory
tk_name             run_retcode run_time
Free Memory                   0 2011-04-05 10:40:19
1 row(s) retrieved.

Start and stop the scheduler

Use the SQL administration API to start and stop the scheduler. Start the scheduler using EXECUTE FUNCTION task("scheduler start");. Stop the schedule using EXECUTE FUNCTION task("scheduler stop");.

Automatically optimizing data storage

The built-in task auto_crsd enables you to do automatic compressing, shrinking, repacking, and defragmenting of tables and extents. This task is disabled by default. To enable this task, update the table ph_task, and set tk_enable to t.

You can control this task by updating rows in the ph_threshold table. The rows for AUTOCOMPRESS_ENABLED, AUTOREPACK_ENABLED, AUTOSHRINK_ENABLED, and AUTODEFRAG_ENABLED control whether the task does compression, repacking, shrinking, or defragmenting.

For example, if you want the task to do compression and defragmenting but not repacking and shrinking, then UPDATE the ph_threshold table as shown in Listing 25.

Listing 25. UPDATE statements
UPDATE ph_threshold
  SET value = 'T'
  WHERE name = 'AUTOCOMPRESS_ENABLED';
UPDATE ph_threshold
  SET value = 'F'
  WHERE name = 'AUTOREPACK_ENABLED';
UPDATE ph_threshold
  SET value = 'F'
  WHERE name = 'AUTOSHRINK_ENABLED';
UPDATE ph_threshold
  SET value = 'T'
  WHERE name = 'AUTODEFRAG_ENABLED';

The following rows are the thresholds that determine when compression, repacking, shrinking, or defragmenting occurs (an example of changing one of these thresholds follows the list):

  • AUTOCOMPRESS_ROWS is the number of uncompressed rows in a table.
  • AUTOREPACK_SPACE is the percentage of noncontiguous space in a table.
  • AUTOSHRINK_UNUSED is the percentage of unused, allocated space in a table or a fragment.
  • AUTODEFRAG_EXTENTS is the number of extents of a table or a fragment.
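
For example, to make defragmenting start only at a higher extent count, you could raise that threshold (the value 100 is an arbitrary example):

UPDATE ph_threshold
  SET value = '100'
  WHERE name = 'AUTODEFRAG_EXTENTS';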

Automatic space management

The built-in task mon_low_storage enables adding more storage space automatically when more space is needed. This task is enabled by default. This task uses information from the storage pool to automatically expand an existing dbspace, temporary dbspace, sbspace, temporary sbspace, or blobspace.

You can control the task by changing the onconfig parameters SP_AUTOEXPAND and SP_THRESHOLD. Set SP_AUTOEXPAND to 0 or SP_THRESHOLD to 0 to disable the automatic expansion of storage spaces. Set SP_THRESHOLD to a value to define the minimum amount of free space that can exist in a storage space before the task expands the space.
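
If you prefer to change these parameters through SQL rather than by editing the onconfig file, a sketch like the following should work, assuming the "modify config persistent" argument of the SQL administration API (described in the next section) is available in your version; the values are arbitrary examples:

EXECUTE FUNCTION task("modify config persistent", "SP_THRESHOLD", "10");
-- Or disable automatic expansion entirely:
EXECUTE FUNCTION task("modify config persistent", "SP_AUTOEXPAND", "0");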

SQL administration API

The SQL administration API enables you to perform remote administration by using SQL functions from client tools. You can perform administrative tasks such as updating configuration parameters, checking data, and configuring storage spaces. You can also complete many of these tasks using the various administrative utilities of Informix.

The SQL administration API consists of two built-in functions: admin() and task(), which are defined in the sysadmin database and invoked with the EXECUTE FUNCTION statement. You must be connected to the sysadmin database to run these functions.

These functions take one or more arguments to define the operation, as shown in Table 9.

Table 9. Administrative tasks you can perform with the SQL administration API
Administrative task | Utility
Compress data and optimize storage | None
Update configuration parameters | onmode
Manage data, partitions, and extents | onmode, oncheck -c, oncheck -p, onspaces
Manage enterprise replication | cdr
Manage high-availability replication | onmode -d, ontape
Control listen threads | onmode -P
Manage logical and physical logs | onparams, ondblog
Manage memory | onmode, onparams
Manage mirroring | onspaces
Manage PDQ | onmode
Change the server mode | onmode
Manage storage spaces | onspaces
Manage storage spaces from a storage pool | None
Manage SQL statement cache | onmode
Manage SQL tracing | SQLTRACE onconfig parameter
Manage SQL tracing for a user, a session, or a database | None

Following are examples of how to use the SQL administration API.

Listing 26 shows the SQL statement that checks the extents. This is equivalent to the oncheck -ce command.

Listing 26. SQL statement that checks extents
EXECUTE FUNCTION admin("check extents");

Listing 27 shows the SQL statement that creates a physical log in the dbspace physdbs with a size of 100 MB. This is equivalent to the onparams -p command.

Listing 27. SQL statement that creates a physical log in the dbspace physdbs with a size of 100 MB
EXECUTE FUNCTION task ("alter plog","physdbs","100 M");

Listing 28 shows the SQL statement that switches the current logical-log file to the next logical-log file. This is equivalent to the onmode -l command.

Listing 28. SQL statement that switches the current logical-log file to the next logical-log file
EXECUTE FUNCTION task ("onmode", "l");

Command history table

The command_history table in the sysadmin database contains information about all the SQL administration API commands executed on this server. The command_history table shows each SQL administration API function that was run and displays information about the user who ran the function, the time the function was run, the primary arguments of the function, and the message returned when the database server finished running the function.

Listing 29 shows the information from the command_history table about the onmode command that is executed in the Free Memory task.

Listing 29. Information from the command_history table
cmd_number      110
cmd_exec_time   2011-04-05 10:40:19
cmd_user        informix
cmd_hostname    NA
cmd_executed    onmode
cmd_ret_status  0
cmd_ret_msg     OK

The built-in task mon_command_history controls the size of the table. This task is enabled by default and deletes rows in this table that are older than 30 days. You can update the table ph_threshold to reduce this time period.

Listing 30 shows how to update the ph_threshold table to delete all rows older than 20 days.

Listing 30. Updating the ph_threshold table to delete all rows older than 20 days
UPDATE ph_threshold
SET value = "20"
WHERE name = "COMMAND HISTORY RETENTION";

Sysadmin database

The sysadmin database contains the tables that organize the scheduler tasks and sensors, store the data collected by sensors, and record the results of scheduler jobs and SQL administration API functions.

The sysadmin database contains the following:

  • The scheduler tables
  • The result tables
  • The storage pool table
  • The command_history table
  • The built-in task() function
  • The built-in admin() function

Table 10 shows the tables in the sysadmin database and describes their purposes.

Table 10. Tables in the sysadmin database
Table | Description
ph_alert | Contains a list of errors, warnings, or informational messages associated with tasks that must be monitored.
ph_group | Contains a list of group names.
ph_run | Contains information about how and when each task and sensor was run.
ph_task | Lists tasks and sensors, and contains information about how and when the database server will run them.
ph_threshold | Contains a list of thresholds that are associated with tasks or sensors.
result tables | Multiple tables that contain historical data collected by sensors. The structure of these tables is determined by the CREATE TABLE statement in the sensor definition in the ph_task table.
storagepool | Contains information about all of the entries in the storage pool. Each entry represents free space that the server can use when automatically expanding a storage space.
command_history | Contains the list and results of all the SQL administration API functions that were run.

The scheduler tables and sensor results tables can consume significant amounts of disk space. You can move the sysadmin database from its default root dbspace location to a different dbspace using the SQL administration API, as shown in Listing 31. Moving the sysadmin database resets the database back to its original state when it was first created, which means all data, command history, results tables, and user-defined tasks are lost.

Listing 31. Move the sysadmin database to the dbspaces dbs1
EXECUTE FUNCTION task("reset sysadmin","dbs1");

Understanding concurrency and locking

In a multithreaded or multiprocess architecture, concurrency is important. Accessing the data in a single-threaded manner causes bottlenecks, as processes or threads have to wait for resources. This can slow processing dramatically. It is more efficient to have multiple applications connecting to the database in parallel.

The database server imposes a system of locks. A lock is a claim, or reservation, that a program can place on a piece of data. The database server guarantees that, as long as the data is locked, no other program can modify it. When another program requests the data, the database server either makes the program wait or turns it back with an error.

Table 11 shows the types of locks that Informix database servers support for different situations; a short example follows the table.

Table 11. Lock types and their usage
Lock type | Use
Shared | A shared lock reserves its object for reading only. It prevents the object from changing while the lock remains. More than one program can place a shared lock on the same object, and more than one program can read the record while it is locked in shared mode.
Exclusive | An exclusive lock reserves its object for the use of a single program. This lock is used when the program intends to change the object. You cannot place an exclusive lock where any other kind of lock exists. After you place an exclusive lock, you cannot place another lock on the same object.
Promotable (or update) | A promotable (or update) lock establishes the intent to update. You can place it only where no other promotable or exclusive lock exists. You can place promotable locks on records that already have shared locks. When the program is about to change the locked object, the promotable lock can be promoted to an exclusive lock, but only if no other locks, including shared locks, are on the record at that time. If a shared lock was on the record when the promotable lock was set, you must drop the shared lock before the promotable lock can be promoted to an exclusive lock.
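
The following brief sketch shows shared and exclusive table-level locks in SQL, using the stores_demo customer table purely as an illustration; locks acquired inside a transaction are released at COMMIT WORK or ROLLBACK WORK:

BEGIN WORK;
LOCK TABLE customer IN SHARE MODE;      -- other sessions can read but not modify
COMMIT WORK;                            -- the shared lock is released here

BEGIN WORK;
LOCK TABLE customer IN EXCLUSIVE MODE;  -- reserves the table for this session
COMMIT WORK;                            -- the exclusive lock is released here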

Isolation level

The number and duration of locks placed on data during a SELECT statement depend on the level of isolation that the user sets. The type of isolation can affect overall performance, because it affects concurrency. Table 12 lists the isolation level types and their usage; the statements after the table show how to select each level.

Table 12. Isolation level types
Isolation level type | Use
Dirty read isolation | The dirty read (or ANSI read uncommitted) isolation level does not place any locks on any rows fetched during a SELECT statement. Dirty read isolation is appropriate for static tables that are used for queries.
Committed read isolation | A reader with the committed read (or ANSI read committed) isolation level checks for locks before returning a row. By checking for locks, the reader cannot return any uncommitted rows.
Cursor stability isolation | A reader with cursor stability isolation acquires a shared lock on the row that is currently fetched. This action assures that no other user can update the row until the user fetches a new row.
Repeatable read isolation | Repeatable read isolation (ANSI serializable and ANSI repeatable read) is the strictest isolation level. With repeatable read, the database server locks all rows examined (not just fetched) for the duration of the transaction.
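
For reference, the following statements select each level for the current session. The SET ISOLATION statement is valid only in a database that has logging:

SET ISOLATION TO DIRTY READ;
SET ISOLATION TO COMMITTED READ;
SET ISOLATION TO CURSOR STABILITY;
SET ISOLATION TO REPEATABLE READ;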

Using WITH CONCURRENT TRANSACTION

The WITH CONCURRENT TRANSACTION clause of the CONNECT statement enables you to switch to a different connection while a transaction is still active in the current connection. Without this clause, you cannot switch to a different connection while a transaction is active: the CONNECT or SET CONNECTION statement fails and returns an error, and the current transaction remains active in the current connection.

The WITH CONCURRENT TRANSACTION clause supports the concept of multiple concurrent transactions, so that each connection can have its own transaction; the COMMIT WORK and ROLLBACK WORK statements affect only the current connection. Global transactions, in which one transaction spans multiple databases over multiple connections, are not supported by the WITH CONCURRENT TRANSACTION clause, because COMMIT WORK and ROLLBACK WORK do not act on databases across multiple connections.
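
The following minimal sketch illustrates the clause; the database names db1 and db2, the connection names con1 and con2, and the audit_log table are placeholders.

-- Open the first connection; its transaction can remain active across a switch.
CONNECT TO 'db1' AS 'con1' WITH CONCURRENT TRANSACTION;
BEGIN WORK;
INSERT INTO audit_log(msg) VALUES ('step 1');

-- Switching is allowed only because WITH CONCURRENT TRANSACTION was specified.
CONNECT TO 'db2' AS 'con2' WITH CONCURRENT TRANSACTION;
BEGIN WORK;
INSERT INTO audit_log(msg) VALUES ('step 2');
COMMIT WORK;            -- commits only the transaction in con2

-- Return to the first connection and finish its transaction.
SET CONNECTION 'con1';
COMMIT WORK;            -- commits only the transaction in con1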

Concurrency and locks

There is always the possibility that, while one client is modifying data, another client will read or try to modify the same data. Two or more clients trying to access the same data are said to be accessing it concurrently. A multiuser database system requires a high level of concurrency, but concurrency can lead to a variety of problems if there is no mechanism to control access to the data: a client could read data that has since been modified or deleted, or an update could be lost.

Depending on the SET LOCK MODE setting, when another client requests locked data, the database server either makes that client wait or tells the client that the resource is not available, generating an error. To control the effect that locks have on your data access, use the SET LOCK MODE statement in combination with either SET ISOLATION or SET TRANSACTION.
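
For example, a session that prefers to wait briefly for conflicting locks rather than receive an immediate error might issue the following statements (a minimal sketch):

-- wait up to 30 seconds for a conflicting lock before receiving an error
SET LOCK MODE TO WAIT 30;

-- or return to the default behavior of failing immediately on a lock conflict
SET LOCK MODE TO NOT WAIT;

-- combine the lock mode with an isolation level
SET ISOLATION TO COMMITTED READ;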

Committed read with last committed isolation level

If the isolation level is COMMITTED READ (and SET LOCK MODE is not set to WAIT), locks held by other sessions can cause SQL operations to fail when the current session cannot acquire a lock. Even setting SET LOCK MODE to WAIT does not help in a deadlock, which occurs when two users each hold a lock that the other wants. The LAST COMMITTED keyword option of the SET ISOLATION COMMITTED READ statement reduces the risk of such locking conflicts. This syntax instructs the server to return the most recently committed version of the rows, even if another concurrent session holds an exclusive lock. You can use the LAST COMMITTED keyword option for B-tree indexes, functional indexes, logged tables, and tables that do not have page-level locking or exclusive locks.
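
A minimal sketch of enabling this behavior for the current session:

-- return the most recently committed version of a row instead of waiting on an exclusive lock
SET ISOLATION TO COMMITTED READ LAST COMMITTED;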

USELASTCOMMITTED configuration parameter

The USELASTCOMMITTED configuration parameter specifies the isolation levels for which the LAST COMMITTED feature of the COMMITTED READ isolation level is implicitly in effect. The following values are valid for USELASTCOMMITTED:

None
The LAST COMMITTED feature is not implicitly in effect for any isolation level.
Committed read
The LAST COMMITTED feature is implicitly in effect for all transactions that use the committed read isolation level.
Dirty read
The LAST COMMITTED feature is implicitly in effect for all transactions that use the dirty read isolation level.
All
The LAST COMMITTED feature is implicitly in effect for transactions that use either the committed read or the dirty read isolation level.

As with other configuration parameters, a change made by editing the value in the ONCONFIG file takes effect only when shared memory is reinitialized. You can also change the value dynamically with the onmode utility, as described in the next section.
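
For example, the corresponding ONCONFIG file entry might look like the following sketch; the ALL setting is just one of the values listed above.

# apply LAST COMMITTED implicitly to committed read and dirty read transactions
USELASTCOMMITTED ALL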

Using onmode to affect isolation level

When the database server is in online mode, you can use the onmode -wm and onmode -wf options to change the value of the USELASTCOMMITTED configuration parameter. To specify a new value that takes effect immediately but is not preserved after a restart, use onmode -wm USELASTCOMMITTED=value.

To change the value of the configuration parameter in the ONCONFIG file, use onmode -wf USELASTCOMMITTED=value.
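
For example, using the ALL setting (a sketch):

# change the value in memory for the current server session
onmode -wm USELASTCOMMITTED=ALL

# record the new value in the ONCONFIG file
onmode -wf USELASTCOMMITTED=ALL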

SET ENVIRONMENT SQL statement

The SET ENVIRONMENT statement of SQL can specify options at runtime that affect subsequent queries submitted within the same routine. This is an extension to the ANSI/ISO standard for SQL. The USELASTCOMMITTED option of SET ENVIRONMENT accepts the same settings as the configuration parameter (None, Committed read, Dirty read, and All), with the same meanings as described above.
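
For example (a minimal sketch; the option value is given as a quoted string):

-- implicitly apply LAST COMMITTED to committed read transactions in this session
SET ENVIRONMENT USELASTCOMMITTED 'COMMITTED READ';

-- turn the implicit behavior off again
SET ENVIRONMENT USELASTCOMMITTED 'NONE';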

Committed read and committed read last committed isolation levels on secondary servers

Committed read and committed read last committed isolation levels are supported on all types of secondary servers. Secondary servers on which committed read isolation is set can read locally committed data; they can also read data committed on the primary server once that data becomes available and is committed on the secondary server.

If the UPDATABLE_SECONDARY configuration parameter is disabled, a secondary data replication server is read-only. In this case, committed read and committed read last committed isolation levels are not available on secondary servers.


Exploring tools specific to each operating system

UNIX tools

This section describes the tools available only for the UNIX operating system.

System activity reporter (sar)

The sar command is useful for monitoring CPU utilization, disk activity, and memory utilization. The example in Listing 32 monitors CPU utilization at intervals of 3 seconds for 10 iterations.

Listing 32. sar example
sar -u 3 10

SunOS IFXSUN 5.10 Generic_118833-36 sun4u    04/08/2011

16:42:06    %usr    %sys    %wio   %idle
16:42:09      27      26       0      47
16:42:12      26      25       0      49
16:42:15      26      24       0      50
16:42:18      30      29       0      41

To identify potential or actual performance bottlenecks or problems, it is necessary to continuously monitor your system and capture these data points over an extended period of time. The UNIX sar command is particularly useful for creating and maintaining long-term performance histories.

sar is available on all hardware platforms that IBM Informix supports. When you run sar with a named output file (the -o option), it creates a single binary-encoded file from which all sar reports can be generated at a later time.

time or timex

The time and timex commands enable you to time the running of a process. Both time and timex report on real time as well as user and system CPU time.
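
For example, the following sketch times a single onstat invocation; the command being timed is only an illustration.

# report real, user, and system CPU time for one onstat run
time onstat -u > /dev/null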

ps

The process status (ps) command is a good source of snapshot information about system processes currently running, as shown in Listing 33.

Listing 33. Process status (ps)
ps -el

 F S    UID   PID  PPID   C PRI NI     ADDR     SZ    WCHAN TTY         TIME CMD
 1 T      0     0     0   0   0 SY        ?      0          ?           0:39 sched
 0 S      0     1     0   0  40 20        ?    300        ? ?           0:27 init
 1 S      0     2     0   0   0 SY        ?      0        ? ?           0:00 pageout
 1 S      0     3     0   0   0 SY        ?      0        ? ?         975:53 fsflush
 0 S      0   107     1   0  40 20        ?   1452        ? ?           1:20 snmpd
 0 S      0     7     1   0  40 20        ?   1575        ? ?           1:44 vc.star
 0 S      0     9     1   0  40 20        ?   1286        ? ?           1:51 vc.conf
 0 S      1   245     1   0  40 20        ?    332        ? ?           0:00 statd
 0 S      0  6244  6228   0  40 20        ?   7729        ? ?           0:52 oninit

iostat

The iostat command provides highly accurate measures of throughput, utilization, queue lengths, transaction rates, and service times, as shown in Listing 34.

Listing 34. The iostat command
iostat -x 5 1
                  extended device statistics                   
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
sd1          0.0    1.1    0.4    1.4  0.0  0.0   34.6   0   1 
sd2          0.0    0.9    0.6    9.8  0.0  0.0   19.4   0   1 
sd3          0.0    0.0    0.0    0.0  0.0  0.0   20.0   0   0 
sd4          0.0    0.0    0.2    0.7  0.0  0.0    7.3   0   0 
sd5          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
nfs1         0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0

vmstat

The vmstat command provides information about the status of processes, memory utilization, paging statistics, system activity, and CPU usage, as shown in Listing 35.

Listing 35. The vmstat command
vmstat 3 5
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s1 s2 s3 s4   in   sy   cs us sy id
 0 0 0 19930448 5681776 519 718 1 1 1 0  0  1  1  0  0  361 527535 598 10 10 80
 0 0 0 17964488 4777560 1 10 0  0  0  0  0  0  0  0  0  457 1597198 836 26 24 49
 0 0 0 17964488 4777560 0 0  0  0  0  0  0  0  0  0  0  451 1596260 824 26 24 49
 0 0 0 17955984 4773040 1698 2593 0 0 0 0 0 1  0  0  0  477 1598837 901 30 29 41
 0 0 0 17964488 4777560 0 0  0  0  0  0  0  0  1  0  0  461 1594752 823 26 25 49

Windows tools

This section describes the tools available only for the Windows operating system.

Windows event viewer

The event viewer shows informational, warning, and error messages for the operating system, other applications, and the database server. To display database server messages on Windows, do the following:

  1. Choose Administrative Tools > Event Viewer.
  2. Choose Log > Application.
  3. Double-click any event for a more detailed message.

The following Informix utilities simplify administration of the database server on Windows.

ixpasswd.exe

The ixpasswd.exe utility changes the logon password for all services that log on as user informix. You can change the password interactively or on the command line using the -y option: ixpasswd [-y new_password]. This utility saves you from having to manually change the password for each service whenever you change the password of the informix user.

If you are logged on locally and run ixpasswd, it changes the password for services that log on as the local informix user. If you are logged on to a domain and run ixpasswd, it changes the password for services that log on as domain\informix.
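
For example (the password shown is only a placeholder):

REM change the logon password for all services that log on as informix
ixpasswd -y NewPassw0rd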

ixsu.exe

The ixsu.exe utility launches a command-line window that runs as the specified user, using the format ixsu [[domain\]username]. The user is a local user unless you specify a domain name. If you do not specify a user name, the default user is informix. With ixsu, you no longer need to log off as the current user and log on as informix to perform database administration tasks that must run as informix. The ixsu utility requires the advanced user right to act as part of the operating system.
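
For example (MYDOMAIN is a placeholder domain name):

REM open a command-line window running as the domain informix user
ixsu MYDOMAIN\informix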

To configure advanced user rights on Windows NT, do the following:

  1. Select User Manager > Policies > User Rights.
  2. Check the Advanced User Rights option.

Note that if you change the advanced user rights for the current user, you need to log off and log back on for the new rights to take effect.

The ixsu utility is equivalent to the Windows 2000 runas command. To use runas to run a command shell as another user, enter runas /user:username cmd.

ntchname.exe

The ntchname.exe utility changes the registry entries for Informix Dynamic Server from the old host name to the new host name, using the format ntchname old_name new_name. Run ntchname after you change the host name. This utility does not change user environment variables. After you run ntchname, edit the %INFORMIXDIR%\%INFORMIXSERVER%.cmd file and change the INFORMIXSQLHOSTS entry to the new host name.
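
For example (the host names are placeholders):

REM update the Informix registry entries after renaming the host from oldhost to newhost
ntchname oldhost newhost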


Conclusion

This tutorial introduced you to the set of monitoring tools that are available with Informix 11.70. You learned how each tool is used to monitor a database system. Database monitoring is a vital activity that, performed on a regular basis, provides continuous feedback on the health of the database system.

The SQL administration API enables you to perform remote administration by using specific SQL commands for tasks such as managing spaces, managing configuration, running routine jobs, and validating the system.

The scheduler manages and executes scheduled maintenance, monitoring, and administration tasks. It enables you to monitor activities (for example, space management, or automatically backing up any new log data at timed intervals since the last log backup) and to create corrective actions that run automatically.

The system-monitoring interface (SMI) tables are special tables managed by the database server that contain dynamic information about the state of the database server. You can use SELECT statements on them to determine almost anything you might want to know about your database server.

The onstat utility provides a way to monitor database server shared memory from the command line. The onstat utility reads data from shared memory and reports statistics that are accurate for the instant during which the command executes. That is, onstat provides information that changes dynamically during processing, including changes in buffers, locks, indexes, and users.

The oncheck utility displays information about the database disk configuration and usage, such as the number of pages used for a table, the contents of the reserved pages, and the number of extents in a table.

The database server message log is an operating system file. You can read the database server message log for a minute-by-minute account of database server processing in order to catch events before a problem develops.

The event alarm feature is the mechanism for automatically triggering administrative actions based on an event that occurs in the database server environment.

Finally, this tutorial described isolation levels and concurrency, and how to adjust them, which is useful in an environment with multiple databases and database servers.
