Use the Informix Dynamic Server scheduler and SQL API

Take advantage of autonomous and remote administration features

IBM® Informix® Dynamic Server 11 (IDS 11) offers new features to assist in automating administration and maintenance tasks. Learn how the new scheduler and SQL API make a DBA's daily tasks easier.


George D. Simpson (geosimps@us.ibm.com), Software Engineer, IDS quality assurance, IBM

George Simpson joined IBM in 2001 as part of the Informix acquisition. He has worked in quality assurance in areas covering interoperability, backup and restore, client and server QA, EGL, and stress testing.



27 September 2007


Introduction to the new database administration features

With the release of IBM IDS 11, a new set of highly anticipated features forms the foundation for a self-managing RDBMS. These tools allow the DBA to collect server information and adjust the database server using SQL as a remote administration tool. Additionally, a scheduler allows those same SQL API commands to be scheduled for execution, much like a UNIX® cron job. By reducing the cost of administration and problem determination, IBM IDS further extends its already lauded reputation as an administration-free database.

The administration enhancements are implemented as:

  • A new system database (sysadmin)
  • The database scheduler
  • An administrative SQL command API
  • The SQLTRACE feature
  • ONCONFIG simplification

This article introduces the concepts behind these features, shows how to troubleshoot them, and covers the basics of automating tasks with the database administration system. Detailed syntax for using these features is included in the appendix.


New for your system

New administration options

Besides the scheduler, the SQL API provides a programmatic method to execute most of the functions of the current administration tools.

What the SQL API can and cannot do:

IBM IDS now provides an SQL interface for the finderr, oninit, onmode, onspaces, onparams, ondblog, oncheck (-c options only), onlog, and onstat utilities. Almost all of those administration utility functions can be integrated into a single SQL script and executed on a predetermined or ad-hoc schedule. Engine setup and administration tasks are now auditable with results being tracked within the engine.

There are no SQL API equivalents for the start-up and shutdown functions, onaudit, oncheck -p, dbschema, onperf, ER administration tasks, the data backup/restore utilities, or the data load/unload/export utilities.

Access to the sysadmin database is initially restricted to the "informix" user, but permission may be extended to any user in the DBSA group or to the system administrator. If you have not granted the correct permissions to the user or the session user ID, SQL API commands fail with error "-387 no connection permission". As DBA, if you decide to extend insert privileges on the ph_task table, be careful about whom you grant them to: those tasks and monitors run as the informix user, so you may unintentionally grant access to other features. It is strongly recommended not to give users direct access to ph_task, but instead to move a trusted user into the DBSA group.
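
If you do choose to extend access directly, the grants themselves are ordinary Informix SQL issued while connected as user "informix". The following is a minimal sketch only; the user name "trusted_dba" is a placeholder, and moving the user into the DBSA group remains the recommended approach:

DATABASE sysadmin;

-- Allow the account to connect to the sysadmin database.
GRANT CONNECT TO trusted_dba;

-- Discouraged: anything this user inserts into ph_task runs as user "informix".
GRANT INSERT ON ph_task TO trusted_dba;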

New environment parameters

There are no new session environment variables added for this feature.

New ONCONFIG parameters

A new ONCONFIG variable has been created for IBM IDS 11 for controlling the SQL trace facility. SQLTRACE allows you to monitor recently executed SQL statements and retrieve various performance metrics about the query.

The optional SQLTRACE parameter takes four fields:

  1. level: Sets the amount of information traced. The default is "off"; accepted values are "low," "med," and "high."
    • The LOW level traces the following SQL statement information:
      • User ID
      • Session ID
      • Current database name
      • Statement type
      • Statement execution time
      • RSAM stats (buffer and page read/writes, sorts, lock requests and waits, logical log records and index buffer reads)
      • Connection block stats (estimated rows returned, estimated cost, number of rows returned, and statement type)
      • Database isolation level
    • The MED (medium) recording level traces the same details as LOW, but adds a list of database names, tables affected by the statement, and stored procedure stack traces.
    • The HIGH level records the same as the MED level, but adds host variable information to the end.
  2. mode: Determines if the server is tracing information for all users (default is "global") or a specified user.
  3. ntraces: Specifies the number of SQL statements retained in the trace buffer (visible through the sysmaster@syssqltrace table). This is a circular buffer, and old data is overwritten once the ntraces limit is reached.
  4. size: The maximum size, in KB, of each trace row that can be stored.

Example:

SQLTRACE    level=HIGH, mode=user, ntraces=1000, size=1000

This information is stored in sysmaster@syssqltrace_info, sysmaster@syssqltrace, and sysmaster@syssqltrace_iter. Only user "informix" has access to these tables.
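
Once tracing is enabled, recent statements can be reviewed with an ordinary query. The following is a minimal sketch; the column names are assumptions, so verify them against the syssqltrace schema on your server:

-- Show the slowest recently traced statements (column names assumed).
SELECT sql_id, sql_sid, sql_runtime, sql_statement
  FROM sysmaster:syssqltrace
 ORDER BY sql_runtime DESC;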

Another ONCONFIG-related change is improved handling of configuration values: the $INFORMIXDIR string can now be used in a value and is expanded, but only if it appears at the beginning of the value.

Example:

ROOTPATH		$INFORMIXDIR/rootdbs

New syntax

The SQL API is implemented as a set of UDRs: "admin" and "task." The routines can be invoked from any programming language with database connectivity, such as JDBC, PHP, or dbaccess. Another way to execute SQL API commands is through the new database scheduler. SQL administration statements take the form:
EXECUTE FUNCTION [task | admin] ("command", arg1, arg2, …)

Example:

EXECUTE FUNCTION TASK ("SET SQL TRACING ON");
EXECUTE FUNCTION ADMIN ("SET SQL TRACING OFF");

The "admin" UDR is more likely to be utilized in scripts or programs, as it returns a positive or negative integer, which indicates success or failure of the command. A positive integer indicates success, while a negative indicates failure. The absolute value of the returned integer is a link to the primary key -- the cmd_number column, in the sysadmin@command_history table. Information about the execution of a command can identified from this table; specifically, the real return message can be queried from the cmd_ret_msg column.

The "task" UDR is more friendly to the DBA, as it is a descriptive message that indicates success or failure of the command executed. It is immediately understandable since it returns an lvarchar containing the statement delivered to the server as well as the server response, error message, or both.

Where appropriate, the task and admin UDRs accept a variety of double-quoted file and data size abbreviations. Instead of calculating a two-gigabyte space in kilobytes, you can simply provide "2GB" or "2000MB". Available options are "B," "KB," "MB," "GB," "TB," and "PB." The default unit is "KB."

Commands can also accept environment variables in any path argument. The environment variable must appear at the beginning of the path for it to be expanded. The recognized environment variables are those known to the server, not to the user session.

Example:

EXECUTE FUNCTION TASK ("CREATE DBSPACE", "dbs1", "$INFORMIXDIR/dbs1", "1GB");

Checking the results

The UDRs recreate much of the existing administrative tooling as an SQL API. Logically, they also return many of the existing IBM Informix return codes and errors. All activities performed with the database administration system are logged to the IDS log file, and more detailed data and results are accumulated in the sysadmin@command_history table.

Table 1. The command_history table

Column          Type                     Description
cmd_number      serial                   The command's serial number (primary key).
cmd_exec_time   datetime year to second  The time the command was started.
cmd_user        varchar                  The user executing the command.
cmd_hostname    varchar                  The hostname where the command was initiated.
cmd_executed    varchar                  The command executed.
cmd_ret_status  integer                  Return code for cmd_executed.
cmd_ret_msg     lvarchar                 Return message for cmd_executed.

You can pinpoint a problem from an SQL API-initiated task or script using an SQL query.

Example:

select cmd_number, cmd_ret_msg from command_history where cmd_number = <return>;

If you issue an invalid command:

EXECUTE FUNCTION ADMIN ("CREATE ASPACE", "dbs88", "/dev/raw99", "2GB", "4GB", 2);

it might return "-161".

So, issue:

select cmd_number, cmd_ret_msg from command_history where cmd_number = 161;

to retrieve the command number and error message:

cmd_number   161
cmd_ret_msg  Unknown command (CREATE ASPACE).

Suppose that in the next command you correct ASPACE to DBSPACE, but provide an invalid disk path. Querying for all data with cmd_number 162 yields what command was issued, by whom, when, and from where.

Listing 1. Sample results from command_history
cmd_number      162
cmd_exec_time   2007-07-03 14:29:03
cmd_user        informix
cmd_hostname    mymachine
cmd_executed    CREATE DBSPACE
cmd_ret_status  -1
cmd_ret_msg     Unable to create file /dev/raw99 Error 13.
File creation failed for dbspace dbs88.

Be sure to use a positive integer in the above query. A negative value in cmd_ret_status signifies that a problem occurred, but cmd_number in the sysadmin database is always greater than zero; there are no negative numbers in that column to search for.
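
In a script, the simplest habit is to take the absolute value of whatever admin() returned before looking it up. A sketch, where the -162 literal stands in for the captured return value:

-- Look up the audit row for a failed admin() call using its absolute value.
SELECT cmd_executed, cmd_ret_status, cmd_ret_msg
  FROM sysadmin:command_history
 WHERE cmd_number = ABS(-162);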


Monitor your system

Sensors and tasks

While the SQL API works within a shell script, or as an SQL or JDBC program, the data server now provides two additional means of taking action through the SQL API and the scheduler. Both types, when created, are installed into the ph_task table in the sysadmin database; what they do and how they respond is determined by whether you create them as "sensors" or "tasks."

A "sensor" allows the DBA to regularly record server data to maintain a historical record of activity on the server. There are two types of sensors: "SENSOR" and "STARTUP SENSOR." A startup sensor is the same as a regular sensor, but only runs when the server starts. A sensor only records data and leaves any reaction to the DBA.

"Tasks" allow the DBA to analyze real-time server data and automatically make adjustments. Such tasks can be as simple as scheduling a time to run update statistics or as complex as watching for system changes and determining for itself when to update statistics or add logs. A monitor has some logic included that allows the server to monitor itself and take a corrective action. Similarly to sensors, there are two types of tasks: "TASK" and "STARTUP TASK". A startup task is a task that runs only at server start time.

The ph_task table is defined as follows:

Table 2. The ph_task table

  • tk_id (serial): A system-assigned unique task identifier. DO NOT MODIFY; system maintained.
  • tk_name (char(36)): A user-provided name for the task; must be unique. Example: "mon_table_profile"
  • tk_description (lvarchar): A long description of the task created. Example: "Collect SQL profile information by table/fragment. Index information is excluded from this collection."
  • tk_type (char(18)): The type of task: "SENSOR" | "STARTUP SENSOR" | "TASK" | "STARTUP TASK"
  • tk_sequence (integer): The number of times this task has been executed. DO NOT MODIFY; system maintained.
  • tk_result_table (varchar): Sensor only. The table name where results are kept. Example: "mon_table_profile"
  • tk_create (varchar): Sensor only. The SQL statement that creates the table named in tk_result_table.
  • tk_dbs (varchar): The database in which to execute the SQL action. Must be a logging database. Example: "sysadmin"
  • tk_execute (lvarchar): The insert/select statement or stored procedure to execute; the scheduler runs it and places results in tk_result_table.
  • tk_delete (interval day to second): Sensor only. Delete any data older than this interval. NULL, or the duration results should be kept; purged at each system startup.
  • tk_start_time (datetime hour to second): Daily start time for this task; NULL allows it to run at any time. Example: DATETIME (6:00:00) HOUR TO SECOND
  • tk_stop_time (datetime hour to second): Daily stop time for this task; NULL means never stop this task. Example: DATETIME (18:00:00) HOUR TO SECOND
  • tk_frequency (interval day to second): How often this task executes. Example: INTERVAL (30) MINUTE TO MINUTE
  • tk_next_execution (datetime year to second): The time this task is next scheduled to execute. DO NOT MODIFY; system updated.
  • tk_total_executions (integer): The number of times the task has been run. DO NOT MODIFY; system updated.
  • tk_total_time (float): The total time spent executing this task. DO NOT MODIFY; system updated.
  • tk_Monday through tk_Sunday (boolean): Reserved for future use.
  • tk_attributes (integer): State of the current task. DO NOT MODIFY; system updated.
  • tk_group (varchar, foreign key): Group name; references ph_group (group_name).
  • tk_enable (boolean): Indicates whether this task is active or disabled. If the value is false, the task will never execute.

Adding a sensor or monitor is as simple as issuing an SQL statement. To get started, you will need:

  • A text description of your creation
  • The name of a table to store results in
  • The statement to create the results table
  • A valid SQL statement, compound SQL statement, SPL, or UDR to execute
  • How frequently you want to execute the SQL/SPL/UDR

For example, the SQL to insert the mon_sys_env sensor looks like this:

Listing 2. SQL to insert the mon_sys_env sensor
INSERT INTO ph_task
(
tk_name,
tk_type,
tk_group,
tk_description,
tk_result_table,
tk_create,
tk_execute,
tk_stop_time,
tk_start_time,
tk_frequency,
tk_delete
)
VALUES
(
"mon_sysenv",
"STARTUP SENSOR",
"SERVER",
"Tracks the database servers startup environment.",
"mon_sysenv",
"create table mon_sysenv (ID integer, name varchar(250), value lvarchar(1024))",
"insert into mon_sysenv select $DATA_SEQ_ID, env_name, env_value 
      FROM sysmaster:sysenv",
NULL,
NULL,
"0 0:01:00",
INTERVAL ( 60 ) DAY TO DAY
);

Once you have determined the SQL command you wish to execute and how often to run it, you can insert it into the ph_task table in the sysadmin database.
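
A task is inserted the same way; only the tk_type and the action change. The following is a sketch, not a shipped task: the task name, schedule, and target database ("stores_demo") are hypothetical placeholders, and tk_dbs must name a logging database.

-- Hypothetical task: refresh optimizer statistics nightly between 2 AM and 4 AM.
INSERT INTO ph_task
(
tk_name,
tk_type,
tk_group,
tk_description,
tk_execute,
tk_start_time,
tk_stop_time,
tk_frequency,
tk_dbs
)
VALUES
(
"nightly_update_stats",
"TASK",
"SERVER",
"Hypothetical example: refresh optimizer statistics every night.",
"UPDATE STATISTICS LOW",
DATETIME(02:00:00) HOUR TO SECOND,
DATETIME(04:00:00) HOUR TO SECOND,
INTERVAL ( 1 ) DAY TO DAY,
"stores_demo"
);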

Pre-installed monitors and sensors

IBM IDS 11 comes with a number of sensors and monitors that are created and started at server initialization. Working examples of the default tasks and monitors can be found in the $INFORMIXDIR/etc/sysadmin/sch_tasks.sql file. You can also see which monitors and tasks are installed with the new onstat option, onstat -g dbc.
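
Because the scheduler reads everything from ph_task, a plain query shows the same information from SQL:

-- List the installed sensors and tasks and when each will run next.
SELECT tk_name, tk_type, tk_frequency, tk_next_execution
  FROM sysadmin:ph_task
 ORDER BY tk_next_execution;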

The following monitors and tasks are predefined and installed in IBM IDS 11:

Table 3. Pre-installed sensors and monitors

  • mon_command_history (monitor): Trims the command history table. The monitor runs once a day, deleting entries older than 30 days.
  • mon_config_startup (startup sensor): Collects information about the server ONCONFIG settings at startup time.
  • mon_config (sensor): Collects information about server ONCONFIG changes since server startup.
  • mon_sysenv (startup sensor): Collects information about the server environment at startup time.
  • mon_profile (sensor): Collects general profile information, including virtual processor ID, OS PID, VP class, number of threads in a wait or executing state, and user and system CPU.
  • mon_vps (sensor): Monitors VPU usage.
  • mon_checkpoint (sensor): Collects checkpoint activity, including start time, duration, flush time, current logical log at checkpoint, logical log position, and timestamp.
  • mon_memory_system (sensor): Tracks shared memory usage for IBM IDS. Specifically, it tracks whether the type of segment is resident (type = 1), virtual (2), or message (3). It also reports the segment size in KB, used memory space in KB, and free memory.
  • mon_table_profile (result table): Collects SQL profile information by table/fragment. Index information is not collected here. Data such as parent part number, total pages, pages used, lock requests, number of uses of a table, and number of rows altered are kept.
  • mon_table_names (sensor): Collects table names.
  • mon_users (sensor): Collects user information, including SID, lock requests, lock waits, rollbacks, and rows/transactions processed.
  • check_backup (task): Runs an SPL procedure to make sure that backups have been taken.
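
Each sensor writes its rows into the result table named in its tk_result_table column, so reviewing collected data is just another query. For example, using the mon_sysenv table created in Listing 2 (and assuming the startup environment includes INFORMIXSERVER):

-- What environment did the server start with?
SELECT name, value
  FROM sysadmin:mon_sysenv
 WHERE name = "INFORMIXSERVER";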

Conclusion

As the volume of data continues to grow, so does the complexity of RDBMS systems. By combining the IBM Informix legacy utilities and autonomous administration into an administration framework, the IBM IDS database administration system continues to drive down the cost of ownership while increasing control.

Provided in this new feature are:

  • A facility to automate tasks so that the engine can handle mundane or unattended events
  • A means to gather a greater volume of user and system metrics
  • A centralized method to administer and review the system

For details on the syntax, see the Appendix section of this article. Then try the syntax out, and see for yourself how this new feature of IDS can make your life easier as a DBA.


Appendix: SQL administration API syntax guide

Administrative tasks are accomplished with the "task" or "admin" UDRs and are issued to the IBM IDS 11 server using the same methods you could otherwise use to deliver an SQL command.

The commands take the form of the command you want to execute, followed by its required and optional arguments, separated by commas. All commands and options are surrounded in quotes, except page sizes, part numbers, and dbspace numbers. File paths, raw spaces, offsets, and dbspace sizes are all quoted:
EXECUTE FUNCTION TASK ("command", "options");
EXECUTE FUNCTION ADMIN ("command", "options");

Example:

Listing 3. Task and admin UDRs
-- To add 50 logical logs of 100 MB each to the dbspace "logdbs", issue:
EXECUTE FUNCTION admin ("ADD LOG","logdbs","100M","50");

-- Below, create a 2GB dbspace "dbs1" with a 4KB page size.
-- Then add a 2GB chunk "dbs1a" with an offset of 2GB.
EXECUTE FUNCTION task ("CREATE DBSPACE","dbs1","/dev/raw1/", "2GB","0KB",4);
EXECUTE FUNCTION task ("ADD CHUNK","dbs1a","/dev/raw1/","2GB","2GB");

Listed below are the supported SQL API commands and their options. Required arguments are shown in square brackets ([]), optional arguments in curly brackets ({}), and where a command accepts multiple distinct values, the accepted values are separated by a pipe (|). A short example combining a few of these commands follows the list.

Administrative commands:

  • ADD BUFFERPOOL [pagesize] {, # buffers} {, # lrus} {, maxdirty} {, mindirty}
  • ADD CHUNK [dbspace name] [, pathname] {, size} {, offset}
  • ADD LOG [dbspace name] {, size} {, num logs} {, "after_current"}
  • ADD MEMORY [size]
  • ADD MIRROR [dbspace name] [, chunk path] [, chunk offset] [, mirror path] [, mirror offset]
  • ALTER CHUNK OFFLINE [chunk path] [, chunk offset]
  • ALTER CHUNK ONLINE [chunk path] [, chunk offset]
  • ALTER LOGMODE [database name] {, "U" | "B" | "A" | "N"}
    • "U" changes the logmode to unbuffered mode.
    • "B" changes the logmode to buffered mode.
    • "A" changes the logmode to ANSI mode.
    • "N" changes the logmode to non-logging mode.
    • The default mode is the unbuffered log mode.
  • ALTER PLOG [dbspace name] {, size}
  • ARCHIVE FAKE
  • CHECK DATA [part number]
  • CHECK EXTENTS {dbspace number}
    • Providing a dbspace number limits the checking to just this dbspace.
    • Omitting it causes all dbspaces to be checked.
  • CHECK PARTITION [part number]
  • CHECKPOINT {"BLOCK" | "UNBLOCK" | "HARD" | "NORM"}
    • By default, this option will do a hard checkpoint.
  • CLEAN SBSPACE [sbspace name | "ALL"]
  • CREATE BLOBSPACE [blobspace name] [, pathname] {, size} {, offset} {, pagesize}{, first extent}{, next extent}
  • CREATE CHUNK [dbspace name] [, pathname] {, size} {, offset} {, mirror path} {, mirror offset} {, "with_check"}
  • CREATE DBSPACE [dbspace name] [, pathname] {, size} {, offset} {, pagesize} {, first extent} {, next extent}
  • CREATE SBSPACE [sbspace name] [, pathname] {, size} {, offset}
  • CREATE TEMPDBSPACE [dbspace name] [, pathname] {, size} {, offset}{, pagesize} {, first extent} {, next extent}
  • DROP BLOBSPACE [blobspace name]
  • DROP CHUNK [dbspace name] [, pathname] {, offset}
  • DROP DBSPACE [dbspace name]
  • DROP LOG [logical log number]
  • DROP SBSPACE [sbspace name]
  • DROP TEMPDBSPACE [dbspace name]
  • ONMODE:

    EXECUTE FUNCTION task ("onmode", "A","5000");

    Table 4. onmode parameters

    • a: Increase shared memory segment size. Tertiary parameter: size in KB.
    • c: Do checkpoint. Tertiary parameter: "block" | "unblock" | "hard" | "norm"
    • C: Tune the B-tree scanner. Tertiary parameter: start <#> | stop <#> | "high" | "low" | "threshold" | "alice" <mode>
    • d: HDR: set DR server type. Tertiary parameters: "standard" | "primary", "<server name>" | "secondary", "<server name>"
    • D: Set the maximum PDQ priority allowed. Tertiary parameter: an unsigned integer 0 to 100.
    • e: Configure or flush the shared statement cache. Tertiary parameter: "on" | "off" | "enable" | "flush"
    • F: Free unused memory.
    • j: Switch the server to administrative mode.
    • l: Switch the logical log.
    • m: Switch the server to multiuser mode.
    • M: Set decision support memory. Tertiary parameter: size in KB.
    • n: Set the shared memory buffer cache to non-resident.
    • O: Override a down space blocking a checkpoint.
    • p: Start up or remove virtual processors of a specific class. Tertiary parameters: "+"|"-", "<number>", "<vpclass>"
    • Q: Set the maximum number of decision support queries. Tertiary parameter: an unsigned integer 1 to 8,388,608.
    • r: Lock memory resident.
    • S: Set the maximum number of decision support scans (the same as setting DS_MAX_SCANS). Tertiary parameter: an unsigned integer between 10 and 1,048,576.
    • wf: Update the onconfig file and save the file. Tertiary parameter: "<config_param>=<value>", where the allowed configuration parameters are: LISTEN_TIMEOUT, MAX_INCOMPLETE_CONNECTIONS, MAX_PDQPRIORITY, RESIDENT, DS_TOTAL_MEMORY, DS_MAX_QUERIES, DS_MAX_SCANS, DS_NONPDQ_QUERY_MEM, ONLIDX_MAXMEM, USELASTCOMMITTED, ADMIN_MODE_USERS, IFX_EXTEND_ROLE, LOG_INDEX_BUILDS, VP_MEMORY_CACHE_KB, RTO_SERVER_RESTART, RAS_PLOG_SPEED, RAS_LLOG_SPEED, AUTO_CKPTS, AUTO_AIOVPS, INDEX_SELFJOIN, USE_BATCHEDREAD, USE_KOBATCHEDREAD, TEMPTAB_NOLOG, EXPLAIN_STAT, SORT_MERGE_SIZE, SDS_TIMEOUT, DISTR_QUERY_FLAGS
    • wm: Update the onconfig value in memory. Tertiary parameter: "<config_param>=<value>", where the allowed configuration parameters are the same as for "wf", plus LRU_AUTO_TUNING.
    • W: Set statement cache parameters.
    • Y: Dynamic explain. Tertiary parameter: "<session ID> <2|1|0>"
    • z: Kill a user session. Tertiary parameter: session ID number.
    • Z: Kill a transaction. Tertiary parameter: shared memory address.
  • PRINT ERROR [error number]
  • PRINT PARTITION ["FULL" | partnumber]
  • RENAME SPACE [old dbspace name] [, new dbspace name]
  • RESET SYSADMIN {,dbspace name}
    • Recreate the sysadmin database.
    • Optionally, supply a dbspace where the database will be recreated.
  • SCHEDULER START {number of threads}
    • Default number of threads is two.
  • SCHEDULER SHUTDOWN
  • SET CHUNK OFFLINE [chunk path] [, chunk offset]
  • SET CHUNK ONLINE [chunk path] [, chunk offset]
  • SET DATASKIP ON [dbspace name]
  • SET DATASKIP OFF [dbspace name]
  • SET SBSPACE ACCESSTIME ON
  • SET SBSPACE ACCESSTIME OFF
  • SET SBSPACE AVG_LO_SIZE [sbspace name] [, size]
  • SET SBSPACE LOGGING OFF [sbspace name]
  • SET SBSPACE LOGGING ON [sbspace name]
  • SET SQL TRACING INFO
  • SET SQL USER TRACING {session ID ,} ["ON" | "OFF" | "CLEAR"]
  • SET SQL USER TRACING CLEAR {session ID}
    • Default session ID is current session ID.
  • SET SQL USER TRACING OFF {session ID}
    • Default session ID is current session ID.
  • SET SQL USER TRACING ON {session ID}
    • Default session ID is current session ID.
  • SET SQL TRACING OFF
  • SET SQL TRACING ON {num traces} {, trace size} {, "LOW" | "MED" | "HIGH"}
    • For SQLTRACE options, the default values are 1000 for number of traces and 512KB for trace size.
  • SET SQL TRACING RESIZE [num traces {, trace size} {, "LOW" | "MED" | "HIGH"}]
    • If the trace size or trace level are not specified, the server will use the previously set levels.
  • START MIRRORING {dbspace name}
  • STOP MIRRORING {dbspace name}
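
As a closing illustration, a couple of the commands above combine naturally into a small maintenance script; the dbspace name and chunk path below are examples only:

-- Add a 1GB chunk to dbspace dbs1, then force a checkpoint (hard by default).
EXECUTE FUNCTION task ("ADD CHUNK", "dbs1", "$INFORMIXDIR/dbs1_chunk2", "1GB", "0KB");
EXECUTE FUNCTION task ("CHECKPOINT");

-- Confirm both calls in the audit trail.
SELECT cmd_executed, cmd_ret_status, cmd_ret_msg
  FROM sysadmin:command_history
 ORDER BY cmd_exec_time DESC;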

