In IDS version 11.10, a significant amount of focus has been given to automatically keeping administrators up to date on database performance. Several performance advisory triggers were added to the Informix server to alert the administrator once a threshold is reached. IDS automatically calculates the various thresholds based on current database activity and writes an appropriate performance advisory to the message log (MSGPATH).
The following performance advisories are available in IDS version 11.10:
- Physical log too small for RTO_SERVER_RESTART
- IDS server boot time too long
- Physical log too small
- Logical log too small
- Long transactions blocking checkpoints
- Physical log too small to accommodate bufferpool flushing
- Logical log too small to accommodate bufferpool flushing
- Physical log too small to accommodate automatic checkpoints
- Logical log too small to accommodate automatic checkpoints
- Bufferpool flushing
In this article we discuss the checkpoint-related advisories. The ‘onstat -g ckp’ command was introduced in IDS version 11.10 to display checkpoint information and checkpoint-related advisories. When a sub-optimal configuration is detected, this command displays recommendations for the checkpoint-related configuration parameters.
The following configuration parameters can significantly impact IDS performance:
During a checkpoint, IDS evaluates the checkpoint-related configuration parameters and produces a performance advisory if a sub-optimal configuration is detected. If these parameters are not tuned properly, the database server can encounter blocking during checkpoints.
The following is an example of ‘onstat -g ckp’ command output:
Description of the ‘onstat -g ckp’ command output:
- The first section of the display describes checkpoint information
- The second section of the ‘onstat -g ckp’ output describes the checkpoint-related advisories
In the above example, the physical log file size (PHYSFILE) was set to 15000 KB. During the checkpoint, IDS evaluated the configuration parameters, found that the physical log was too small (which can cause checkpoint blocking), and suggested a PHYSFILE value of 16800 KB.
The checkpoint performance advisories are also written to the message log file. The following is an example of a message log entry containing a checkpoint performance advisory:
In the IDS Cheetah release, you can encrypt communication between an HDR pair to secure the transmission of data over unsecured networks, including the Internet. After you enable encryption, the HDR primary server encrypts the data before sending it to the HDR secondary server, and the secondary server decrypts it. Use the new ONCONFIG parameter ENCRYPT_HDR to enable encryption between the HDR pair. You can also customize encryption using the following parameters.
|Configuration Parameter ||Default value ||Comments/Description |
|ENCRYPT_HDR ||0 ||0 - disable, 1 - enable HDR encryption |
|ENCRYPT_CIPHERS ||allbut:<ecb> || Defines ciphers and modes that can be used by the current database session. |
The following ciphers are currently supported:
- des (64-bit key), des3 (Triple DES), desx (Extended DES, 128-bit key)
- aes/aes128 (128-bit key), aes192 (192-bit key), aes256 (256-bit key)
- bf-1 (Blowfish, 64-bit key), bf-2 (128-bit key), bf-3 (192-bit key)
|ENCRYPT_MAC ||medium ||Controls the level of message authentication code (MAC) generation. |
- off - does not use MAC generation.
- low - uses XOR folding on all messages.
- medium - uses SHA1 MAC generation for all messages greater than 20 bytes long, and XOR folding on smaller messages.
- high - uses SHA1 MAC generation on all messages.
|ENCRYPT_MACFILE ||builtin ||A list of the full path names of MAC key files. |
|ENCRYPT_SWITCH ||60,60 ||Defines the frequency (in minutes) at which ciphers and secret keys are renegotiated. |
To use your own MAC key file:
- Execute the following command to generate the MAC key file:
$INFORMIXDIR/bin/GenMacKey -o /usr/informix/etc/MacKey1.dat
- Copy MacKey1.dat to the paired server
- Update the ENCRYPT_MACFILE configuration parameter on both servers
- NOTE: HDR and Enterprise Replication (ER) share the same encryption configuration parameters: ENCRYPT_CIPHERS, ENCRYPT_MAC, ENCRYPT_MACFILE, and ENCRYPT_SWITCH.
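As an illustrative sketch of the last step (the key file path simply reuses the GenMacKey example above; listing builtin keeps the built-in key available as a fallback), the ONCONFIG entry on both servers might look like:

```
# ONCONFIG fragment - illustrative path, not a mandated location
ENCRYPT_MACFILE /usr/informix/etc/MacKey1.dat,builtin
```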
SQLTRACE part 2
Today we will cover part 2 of the new SQL tracing facility in Cheetah. In part 1 we discussed how to enable SQLTRACE and how to monitor it via onstat -g his. I'm sure you noticed that while onstat -g his provides a lot of very useful information, you would need some serious awk and sed skills to script it so that, for example, you only saw queries that had an average execution time of 1 second or more. Fortunately, SQL tracing data is also exposed in the sysmaster database, in a table called syssqltrace.
This table allows you to quickly get subsections of the SQL trace output. For example, you can find out how many queries took more than 1 second to run by executing the following query: select count(*) from syssqltrace where sql_totaltime > 1;
You can obviously drill down and get additional information includingthe actual queries that were run that took more than 1 second.
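For example, a sketch of such a drill-down query (the column names sql_id, sql_runtime, sql_totaltime, and sql_statement follow the syssqltrace schema; verify them against your sysmaster version):

```sql
-- List the traced statements that took more than 1 second in total,
-- slowest first.
SELECT sql_id, sql_runtime, sql_totaltime, sql_statement
  FROM sysmaster:syssqltrace
 WHERE sql_totaltime > 1
 ORDER BY sql_totaltime DESC;
```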
The table syssqltrace is not the only table, however, that covers SQL tracing information. The table syssqltrace_iter provides information in the form of an iterator tree for each SQL statement, which lets you see what part of your query plan took the most time to run.
These two tables allow you to use the information gathered from SQL tracing, generate reports, and diagnose problem SQL more efficiently. Mark Jamison
The TRUNCATE TABLE feature was actually introduced in IDS starting with version 10.00.xC4, but with restrictions; one of the restrictions was that it did not work on Enterprise Replication (ER) replicated tables. Starting with the Cheetah release, it is supported even on replicated tables. This feature introduces TRUNCATE TABLE as a supported operation on a replicated table while replication is active, which eliminates the need for dropping and redefining the replicate and saves time for DBAs. The feature supports TRUNCATE TABLE on the local table only, meaning ER will not propagate the TRUNCATE operation to other participants. The TRUNCATE command needs to be issued on each participant as needed. There are plans to implement propagation of the command in the future, along with other alter operations (ALTER, RENAME).
One of the key uses of TRUNCATE support, in conjunction with cdr sync, is this: if there is a significant difference between the source and the target, you might elect to perform a TRUNCATE TABLE followed by a cdr sync command. This is faster than performing a cdr repair, because cdr repair must scan the data pages in primary key order while the sync does not. The cdr sync/repair features, introduced in 10.x, provide the ability to recognize the differences between tables across multiple nodes within the replication domain while ER is running, and to take necessary action as instructed. Vijay Lolabattu
The latest Beta drop of IDS Cheetah is now available, with some important new features integrated into the code line.
New features available in Beta drop 5 include:
- AIX 64-bit platform build available for download
- Web Feature Service for Spatial Data - Delivers spatial information as a web service for your Service Oriented Architecture
- Label Based Access Control (LBAC) - Control data access at a row and column level using security labels
- Continuous Availability with multiple HDR remote secondary servers
The following Infocenter page has a complete list of new features for each Beta drop: http://publib.boulder.ibm.com/infocenter/idshelp/v111/index.jsp?topic=/com.ibm.po.doc/newfeatures_11.10_beta.html
From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME was not among the supported operations. That changes in Cheetah: IDS 11.x supports RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without it, the DBA would have to plan a window in which ER could be removed from the object so that the rename could be performed, requiring advance planning and scheduling during non-peak hours.
When a RENAME operation is performed, the affected replicate definitions are updated to reflect the impact of the RENAME, and a control message is sent to the other servers informing them of the rename. The RENAME operation is allowed only on a mastered replicate. ER does not propagate the RENAME command itself (there are plans to implement that in the future); the user simply issues the rename DDL statement on each of the servers affected by the RENAME. If the rename affects a column or a table, the replicate is cycled (cdr stop/start replicate), in much the same way that occurs with a table ALTER. If a database is renamed, however, ER itself is cycled (cdr stop/start). This cycling occurs only if the rename affects something on the local node. In all cases, the rename causes a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated. Vijay Lolabattu
One of the new features in Cheetah is the ability to enable tracing for SQL. While this has been available in a limited way through the use of the IBM product I-SPY, the drawback has always been the performance hit of actually monitoring this information. That limitation is effectively eliminated with this feature in Cheetah. So how is SQL tracing enabled? It's actually pretty simple; one way is just to put SQLTRACE in your ONCONFIG. One configuration I use most often is:
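As a sketch, such an ONCONFIG entry might look like the following; the level/ntraces/size/mode values mirror the task() call shown further down, but check the exact syntax against your release documentation:

```
# ONCONFIG fragment - trace up to 1000 statements at low detail,
# 2 KB of trace buffer each, for all sessions
SQLTRACE level=low,ntraces=1000,size=2,mode=global
```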
This allows me to keep track of the last 1000 SQL statements on the instance.
However, the ONCONFIG file is not the only way to enable tracing; you can also do this through the sysadmin database by running the following:
EXECUTE FUNCTION task("set sql tracing on", 1000, 2, "low", "global");
So by using either of the above methods you have now enabled SQLTRACE, but how do you validate it? The easiest method is to run:
onstat -g his
Below is a sample of onstat -g his output.
In Part 2 we will look at how to read this in SQL, and possible options for the data.
This article describes how the IBM Informix DB Scheduler automatically keeps administrators up to date on database performance and availability, and executes corrective actions automatically.
What is DB Scheduler?
The DB Scheduler is a self-managing technology that simplifies database administration and lets administrators concentrate on other business issues. In today's 24/7 support environment, where every minute counts, this self-managing technology can significantly reduce database downtime by sending alerts before a problem occurs.
The DB Scheduler manages and executes scheduled maintenance, monitoring, and administration tasks. This tool enables administrators to monitor activities (for example, space management and automatic backups) and create corrective actions that run automatically.
The DB Scheduler comprises Tasks and Sensors.
- TASK: A task is a means to execute a specific job at a specific time or interval. Some tasks run only once, when the database server starts; these are called ‘startup tasks’. A task is executed by invoking one of the following:
- A single or compound SQL statement
- Stored procedure
- C User Defined Routine
- Java User Defined Routine
- SENSOR: A sensor is a specialized task for collecting and saving data without database administrator intervention. It provides a simple way of collecting information. Some sensors run only once, when the database server starts; these are called ‘startup sensors’. Sensors are a way of ensuring that routine jobs get completed, and of periodically checking and analyzing the collected data to ensure the data server is operating efficiently.
The DB Scheduler is defined and driven by tasks. A new database, ‘sysadmin’, was introduced in IDS v11 for the DB Scheduler. The sysadmin database has five tables that contain and organize the different properties of the DB Scheduler. Only members of the DBSA group are granted access to connect to the sysadmin database; by default, user informix is granted access. The following tables exist in the sysadmin database:
Two parameters are available for use when creating tasks and sensors. Their values can be used within the SQL statement for storing and tracing data in the sysadmin database.
- $DATA_SEQ_ID – The number of times the task has been run. Each time a task executes, its sequence id increases by 1.
- $DATA_TASK_ID – A unique task id for each task.
The SQL statement is checked for the two reserved strings $DATA_SEQ_ID and $DATA_TASK_ID. If they are found inside a SQL statement, the current sequence id replaces $DATA_SEQ_ID and the current task id replaces $DATA_TASK_ID.
How to create a new Task?
A task is created by inserting a row into the ph_task table. The ph_task table lists all tasks and contains information about how and when the database server executes each task.
When the database server executes a task, it invokes the SQL object contained in the tk_execute column of the ph_task table. The tk_execute column can contain a user-defined function, a single SQL statement, or a multi-statement SQL string.
Prior to creating a new task, we need to plan the following:
- A description of the task
- The SQL object, stored procedure, or function that carries out the objective of the task
- Information on when and how often the task needs to run
- The SQL statement that inserts a new row into the ph_task table to create the task
The following is an example of creating a task that runs once a day at 4:00 A.M. to ensure that the table ‘orders’ contains only the last month of data:
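A sketch of such a task definition follows; the ph_task column names are from the IDS 11 sysadmin schema, while the task group, database name, and retention predicate are illustrative assumptions:

```sql
DATABASE sysadmin;

-- Illustrative daily 4:00 A.M. task that trims 'orders' to the last month.
-- Column names follow the ph_task schema; the 'TABLES' group and the
-- stores_demo database are assumptions for this example.
INSERT INTO ph_task
    (tk_name, tk_description, tk_type, tk_group,
     tk_execute, tk_start_time, tk_frequency)
VALUES
    ('DELETE ORDER RECORDS',
     'Remove order rows older than one month',
     'TASK',
     'TABLES',
     'DELETE FROM stores_demo:orders WHERE order_date < (TODAY - 30)',
     DATETIME(04:00:00) HOUR TO SECOND,
     INTERVAL (1) DAY TO DAY);

CLOSE DATABASE;
```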
The same task can be created using the ph_threshold table to hold the order retention policy. An associated row must be inserted into the ph_threshold table for 'DELETE ORDER RECORDS'. You can then modify the order retention period by changing the 'DELETE ORDER RECORDS' row in the ph_threshold table; this parameter sets the length of time rows should remain in the orders table. The following is an example of creating the task using the ph_threshold table:
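A hedged sketch of the companion threshold row (the ph_threshold column names here are assumptions based on the IDS 11 sysadmin schema; verify them against your server):

```sql
DATABASE sysadmin;

-- Illustrative retention-policy row; the task's tk_execute would read
-- this value instead of hard-coding 30 days.
INSERT INTO ph_threshold
    (name, task_name, value, value_type, description)
VALUES
    ('DELETE ORDER RECORDS',
     'DELETE ORDER RECORDS',
     '30',
     'NUMERIC',
     'Number of days order rows are retained');

CLOSE DATABASE;
```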
How to create a Sensor?
A sensor is a specialized task for collecting and saving data. A sensor is created the same way as a task, by inserting a row into the ph_task table, with some additional information.
Prior to creating a new sensor, we need to plan the following:
- A description of the sensor
- The result table in which to save the collected information
- The SQL object, stored procedure, or function that collects the information
- Information on when and how often the sensor needs to run
- The SQL statement that inserts a new row into the ph_task table to create the sensor
The following is an example of a sensor that collects information on database memory usage and stores it in a result table called ‘mon_memory_system’. If the result table does not exist, the sensor creates it. In this example the sensor runs every 30 minutes and also deletes data older than 30 days from the ‘mon_memory_system’ table:
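A sketch of such a sensor definition follows; the ph_task column names (including the sensor-specific tk_result_table, tk_create, and tk_delete) follow the IDS 11 sysadmin schema, and the sysmaster:sysseglst column names should be verified against your server:

```sql
DATABASE sysadmin;

-- Illustrative memory-usage sensor. Note the $DATA_SEQ_ID reserved
-- string described earlier: the scheduler substitutes the current
-- sequence id into the INSERT each time the sensor runs.
INSERT INTO ph_task
    (tk_name, tk_description, tk_type, tk_group,
     tk_result_table, tk_create, tk_execute,
     tk_frequency, tk_delete)
VALUES
    ('mon_memory_system',
     'Collect server memory usage',
     'SENSOR',
     'MEMORY',
     'mon_memory_system',
     'CREATE TABLE mon_memory_system (id integer, class smallint, size int8, used int8, free int8)',
     'INSERT INTO mon_memory_system SELECT $DATA_SEQ_ID, seg_class, seg_size, seg_blkused, seg_blkfree FROM sysmaster:sysseglst',
     INTERVAL (30) MINUTE TO MINUTE,
     INTERVAL (30) DAY TO DAY);

CLOSE DATABASE;
```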
The DB Scheduler has some built-in sensors and tasks that can be used for various scheduled maintenance, monitoring, and administration activities. The following is a list of the built-in sensors and tasks:
How to stop the DB Scheduler?
The DB Scheduler is a useful feature; however, if it needs to be stopped for some reason, the recommended way is to use the task() routine. The following SQL can be used to shut down the DB Scheduler:
DATABASE sysadmin; EXECUTE FUNCTION task("scheduler shutdown"); CLOSE DATABASE;
How to limit the space used by the DB Scheduler?
The DB Scheduler needs additional disk space for historical data, because sensors collect and save information in result tables in the sysadmin database. The following formula can be used to estimate the disk usage for each sensor:
(Number of rows collected * Size of the row collected * The frequency of data collection per day * The retention period)
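As a worked example with illustrative numbers: a sensor that collects 1,000 rows of 200 bytes each, every 30 minutes (48 collections per day), with a 30-day retention period, needs roughly:

```
1,000 rows * 200 bytes * 48 collections/day * 30 days
    = 288,000,000 bytes (about 275 MB)
```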
By disabling sensors you can prevent the collection and saving of historical information. However, disabling sensors can adversely affect other database server functions. It is better to update the ph_task table in the sysadmin database to limit the amount of data a sensor collects than to disable the sensor. The following example tells the DB Scheduler to delete data older than 4 times the collection frequency:
DATABASE sysadmin; UPDATE ph_task SET tk_delete = (4 * tk_frequency) WHERE tk_type = 'SENSOR';
If you use Enterprise Replication (ER), you can use onstat -g rqm to print the statistics and contents of the low-level queues managed by the Reliable Queue Manager (RQM). From IDS 10.00.xC7 onwards, a new option is available to print information about the sbspaces configured for ER.
onstat -g rqm SBSPACES prints detailed statistical information about the sbspaces configured for CDR_QDATA_SBSPACE. A sample output is shown below:
$ onstat -g rqm SBSPACES
IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 00:18:03 -- 67584 Kbytes
RQM Space Statistics for CDR_QDATA_SBSPACE:
-------------------------------------------
name/addr   number  used  free  total  %full  pathname
0xda3d088   2       1     4586  4587   0      ./replsbs
replsbs     2       1     4586  4587   0
0xda539c0   3       1     4586  4587   0      ./sbsp1
sbsp1       3       2     9172  9174   0
As you can see, it prints the address of the chunk or the name of the sbspace, the chunk/space number, total pages, free pages, percentage used, and the path name for the chunks. The information is printed for each chunk in an sbspace, followed by the sbspace itself. In this case, replsbs and sbsp1 are the sbspaces configured for CDR_QDATA_SBSPACE. Suma Vinod
OpenAdmin Tool for IDS, the PHP-based administration console for IDS, is now available for download from the Free Product download site.
Updated 2/14/08 with new recommended PHP and XAMPP versions which are good for Cheetah and Cheetah 2.
Note: This article was updated for the latest version of OpenAdmin Tool on July 30 2007.
Installation of OpenAdmin can appear complex; this article provides an example of one way of installing it, using an XAMPP package on Windows. XAMPP is a bundle of open-source utilities (such as Apache, PHP, and PEAR) that are used by OpenAdmin. Installing XAMPP simplifies the OpenAdmin install, but many other installation and configuration options are possible.
High-level Summary of Install steps
Detailed steps are provided below. This is the high-level sequence of steps you will follow:
- Install Informix Client SDK (CSDK) or Connect 3.00.TC1 (or later)
- Download and extract the XAMPP 1.6.4 zip file
- Run XAMPP setup
- Edit the PHP configuration file
- Download and extract the OpenAdmin package
- Start the Apache Webserver
- Install and configure OpenAdmin

1. Install Informix Client SDK or Connect

If not already installed, install the Informix Client software. Remember where you install it; the INFORMIXDIR value will need to be set as an environment variable when you start the webserver. You can set INFORMIXDIR in the system environment using Control Panel->System. (Note: CSDK or Connect 3.00.TC1 or later is recommended.)

2. Download XAMPP and extract files
- Unzip XAMPP to the destination directory of your choice. (In this example we'll assume the zip was extracted to c:\, which would put the XAMPP software in c:\XAMPP\.)
3. Run XAMPP setup
cd to the XAMPP directory and run setup_xampp. When the batch file finishes running, press any key to continue.
4. Update the php configuration file (php.ini)
Navigate to the xampp\apache\bin subdirectory, and open the php.ini file in a text editor. (Note: the XAMPP package contains more than one php.ini file; the correct one is in xampp\apache\bin, i.e., c:\xampp\apache\bin\php.ini.)
Edit php.ini to enable the SQLITE and Informix PDO drivers:
- Locate the following lines and remove any comment indicators from in front of them:
- Add the following line to the file:
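As a sketch of what those edits look like in a Windows PHP install of this vintage (the exact extension file names are assumptions; use the names listed in your own php.ini):

```ini
; Uncomment (remove the leading ';') the PDO driver lines:
extension=php_pdo.dll
extension=php_pdo_sqlite.dll

; Add a line for the Informix PDO driver:
extension=php_pdo_informix.dll
```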
5. Download and Extract the OpenAdmin zip file.
- Download the OpenAdmin zip file from the Open Source download site to a temporary location.
- Create a new directory under c:\xampp\htdocs to contain the zipped files (For example: c:\xampp\htdocs\openadmin)
- Unzip oatids.zip to this new directory.
6. Start the Apache Webserver.
- Verify INFORMIXDIR is set to the Informix Connect or CSDK directory.
- Run c:\xampp\xampp-control.exe.
- Start the Apache Webserver.
7. Install and Configure OpenAdmin.
Go to http://localhost/openadmin/install and follow the installation instructions. After the first install screen a check will be made to ensure you have the correct PDO drivers:
Enter the base URL for your machine, language defaults, and optionallythe Google maps key for your machine URL:
8. Start using OpenAdmin
Once installation is complete you can go to the main page (http://localhost/openadmin in this example), click Admin, and start adding IDS 11.10 connections. Important: If you are using a version of CSDK prior to 3.00.TC1, there must be a SQLHOSTS entry on the Webserver machine for each connection you add (use setnet32 to add Informix servers to SQLHOSTS on Windows).
Once a connection is defined you can return to the main screen and connect:
The connection administration allows you to create groups of connections, and assign a password for each group, making it easier to administer a large number of instances.
Guy Bowerman
There is good news for those who were downloading the Node.1.0 DataBlade from developerWorks and using it as-is. As per policy, there is no support for as-is software downloaded from developerWorks.
The Node DataBlade provides an opaque ‘node’ data type that addresses the problem of transitive closure/hierarchical data queries.
From IDS v11.10 onwards, the Node.2.0 DataBlade (with some added functionality) is bundled as one of the default DataBlades with the server. That means that, moving forward, the Node.2.0 DataBlade (and subsequent higher versions) is eligible for support. Anup Nair
I don’t know about the rest of you, but I have had a lot of problems truly determining whether my SQL cache settings are efficient. If my User Defined Routine (UDR) cache, for example, always seems to be 50% full, does that mean the cache only ever uses 50%, so the settings are too high? Or is it at 50% because I’m constantly cleaning the pools, and thus I’m configured too low? Beginning in Cheetah you now have a means to diagnose that type of information, and it’s located in the sysmaster database. The table name is syssqlcacheprof, and it contains profile information for each of the caches.
Below is a sample output from this table:
As you can see above, you can now quickly identify how often each cache is removing entries (cleaning), and what the efficiency of your cache is in terms of hit rate. In fact, a very simple query for analyzing the hit ratio of the caches (casting to FLOAT so the ratio is not truncated by integer division) would be:
select p.*, (p.hits::FLOAT / (p.hits + p.misses)) hit_ratio from syssqlcacheprof p;
This table also provides you the benefit of quickly seeing how much memory each cache is actually using, something that before Cheetah was a bit awkward to calculate. While not a major feature of Cheetah, this new sysmaster table is a valuable new asset in performance tuning.
Welcome to the IDS Experts Blog, an international team blog authored by IBM software engineers. These experts from the Informix Technical Support and R&D labs will be posting about a variety of topics related to Informix and IDS. A popular topic over the next few months is likely to be the 11.10 release of Informix Dynamic Server, currently in Open Beta.
If you have suggestions on articles you'd like to see from support or development engineers, please add a comment to the blog. Your feedback to this blog goes directly to engineers working on Informix products.