A new version of OpenAdmin Tool for IDS
(formerly known as IDSAdmin) is available to download from the IDS 11.10 Open Beta site
(sign-in required). This new version has a simpler installation, new graphics, and many new features.
Please keep in mind that:
- The product is still listed as IDSAdmin on the Beta download site.
- The Readme file downloadable from the Beta site is (at the time of writing) an out-of-date version from March. Do not use it; instead, unzip the oatids package and refer to the Readme inside.
Here is a copy of the current Readme:
OpenAdmin Tool For IDS - v2.10 - July 2007
OpenAdmin Tool For IDS (OAT) is a PHP-based administration console which can be used to administer one or more IBM Informix Dynamic Server 11.10 instances.
Dependencies
OAT requires the following products to be installed:
Note: the versions in brackets indicate the versions that OAT has been tested with.
- A web server (Apache 2.2.3)
- IBM I-Connect or CSDK (3.00)
- PHP 5 compiled with PDO, PDO_SQLITE, GD and SOAP enabled. (5.2.2)
- Informix PDO Module.
Installation Instructions
1. Install and set up a working web server that has been configured to serve PHP pages. (For more information see the NOTES section below.)
2. Install IBM I-Connect or Client SDK.
3. Update the PHP configuration file:
   - edit the php.ini
   - add two new lines to the 'extension' section of the configuration file if they are not present:
   - modify the memory_limit parameter to 256M.
4. Install the OpenAdmin Tool For IDS package:
   - Extract the OAT package into your web server document root directory.
5. Change the ownership of the "<OAT>/install" directory to the user and group that runs the apache (httpd) server:
   - Find the user and group that runs the apache (httpd) server from the httpd.conf.
   - chown <user>:<group> <OAT>/install (Unix/Linux)
6. Start the web server, making sure INFORMIXDIR points to the Client SDK or I-Connect install location in the web server environment.
7. Launch the OAT installer by visiting the web page:
   SERVERNAME = the name of your machine
   LOCATION = where you extracted the tar file (e.g. oat)
Follow the instructions in the OAT installation screens, including obtaining an optional Google Maps API key for your domain if required.
Once installation is complete, point your browser at the OAT root URL. From there you can click on Admin and add a new IDS 11.10 connection to the default group, or create a new group. If you are using an I-Connect or Client SDK version prior to 3.0, then for each new connection you add there needs to be a corresponding SQLHOSTS entry for that connection on the web server machine.
Once a connection is created, the "Get Servers" link on the OAT root page will retrieve the list of connections you have created.
Installing an Apache web server with PHP is not a straightforward task; thankfully, there are third-party solutions available. A popular choice is XAMPP.
Using XAMPP
XAMPP is an easy-to-install Apache distribution containing MySQL, PHP and Perl. It can be obtained from the Apache Friends website.
The XAMPP version 1.6.2 for Windows contains everything you need to run OAT (with the exception of IBM I-Connect or CSDK); download and installation information is available on the XAMPP website.
XAMPP (also called LAMPP) is also available for Linux; however, you will need to download the 'Development Package' in order to compile the Informix PDO driver. Download and installation information is available on the XAMPP website.
Informix PDO Driver
A developerWorks article is available with useful information to assist with building an Informix PDO driver.
In IDS 11.10, all optimizer directives, including external directives, can be used in queries that use ANSI joins. Using optimizer directives can enable you to force a favorable query execution path, thus improving performance. Here is a description of the different optimizer directives and how they work with ANSI join queries.
|Directive Class ||Directives ||Behavior with ANSI Join Queries |
|Optimization Goal Directives ||FIRST_ROWS, ALL_ROWS ||Directives enabled for ANSI join queries |
|Access Method Directives ||FULL, INDEX, AVOID_INDEX, AVOID_FULL, INDEX_SJ, AVOID_INDEX_SJ ||Directives enabled for ANSI join queries |
|Explain-Mode Directives ||EXPLAIN, AVOID_EXECUTE ||Directives enabled for ANSI join queries |
|External Directives || ||Directives enabled for ANSI join queries |
|Join Method Directives ||USE_HASH, USE_NL, AVOID_HASH, AVOID_NL, /BUILD, /PROBE ||Directives enabled for ANSI inner joins; not enabled for ANSI inner joins in views; enabled for ANSI outer joins that can be transformed into ANSI inner joins; disabled for all other ANSI outer joins |
|Join Order Directives ||ORDERED ||Directives enabled for ANSI inner joins; enabled for ANSI left outer joins if the directive does not conflict with the ordering requirement for outer joins; disabled for all other ANSI outer joins |
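For illustration, here is a hedged sketch of directives in an ANSI joined query, using the stores_demo customer and orders tables (the directive comment syntax is standard Informix; the predicate is illustrative):

```sql
-- Access-method and optimization-goal directives in an ANSI inner join:
-- AVOID_FULL steers the optimizer away from a sequential scan of orders,
-- and FIRST_ROWS optimizes for initial response time.
SELECT --+ AVOID_FULL(o), FIRST_ROWS
       c.customer_num, c.company, o.order_num
  FROM customer c
       INNER JOIN orders o ON c.customer_num = o.customer_num
 WHERE o.ship_charge > 20;
```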
Multi-node Active Cluster for High Availability (MACH11) is a code name for the new feature called Continuous Availability introduced in Informix Dynamic Server (IDS) 11. This new feature significantly expands the high availability options in IDS to provide increased failover, capacity, flexibility, and scalability.
Traditionally, IDS has provided multiple robust solutions for supporting high availability data replication. Previous releases of IDS have supported two replication technologies: Enterprise Replication (ER) and High Availability Data Replication (HDR). By using these technologies together, customers are able to achieve very high levels of data availability. Both replication technologies can be integrated with each other and coexist with other availability solutions such as disk mirroring.
HDR and ER have been features of IDS for many years and have proven to be highly reliable, low-maintenance technologies. IDS now adds support for two new types of secondary servers:
- Secondary servers that share the same physical disk. Shared Disk Secondary (SDS) servers provide increased availability by allowing one or more instances of the IDS server to attach to the same disk subsystem, providing redundancy for the server in addition to data redundancy solutions.
- Additional remote secondary servers. Remote Standalone Secondary (RSS) servers extend HDR to provide multiple local or remote backup servers that also replicate the data.
Both SDS and RSS servers give customers a way to obtain increased capacity by distributing workload across multiple servers. Customers can choose any of these solutions on their own; they become even more powerful when all three types of topologies are combined. By adding Enterprise Replication, a completely customized availability solution can be delivered to meet each unique availability requirement. These configurations are simple to set up and maintain, and are highly scalable.
The following image depicts a Continuous Availability solution with three-tier protection. For example, if the primary in Building-A in New Orleans went down for some reason, the role of the primary can easily be switched to one of the SDS servers running on the blade server in Building-B. This would cause all other secondary servers to automatically connect to the new primary server. If both servers in New Orleans died, Memphis becomes your primary, Denver can be made the HDR secondary, and you may even add some SDS servers to the Memphis blade server for load balancing.
To learn more about this feature, please read the whitepaper on MACH11 technology written by the architects of this feature.

Vijay Lolabattu
Derived Tables in the FROM Clause of Queries
The SELECT statement can now include syntax that complies with ISO/IEC 9075:1992, the SQL-92 standard, to specify a full select subquery in the FROM clause as a data source for the query. These subqueries are called derived tables or table expressions. They can be simple, UNION, or joined subqueries, including OUTER joins, and can include the ORDER BY clause. In addition, AS correlation specifications in the FROM clause can declare temporary names for columns within the query. Informix-extension syntax, such as the FUNCTION keyword with iterator functions or the TABLE (MULTISET (SELECT ...)) keywords for collection-derived tables, can now be replaced in the FROM clause by SQL-92 syntax. This feature expands the capability of Informix Dynamic Server to run, without modification, queries that are interoperable on other database servers that support industry-standard SQL syntax.
You can find examples using derived tables in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/sel_sql99.sql.
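A hedged sketch of the SQL-92 style, using the stores_demo customer and orders tables (the column choices are illustrative):

```sql
-- A derived table with an AS correlation clause in the FROM clause,
-- replacing the older TABLE(MULTISET(SELECT ...)) Informix syntax.
SELECT s.state, s.order_total
  FROM (SELECT c.state, SUM(o.ship_charge)
          FROM customer c, orders o
         WHERE c.customer_num = o.customer_num
         GROUP BY c.state) AS s(state, order_total)
 ORDER BY s.order_total DESC;
```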
Optimizer Directives in ANSI-Compliant Joined Queries
Earlier IDS versions supported optimizer directives in Informix-extension joined queries, but not in queries that used ANSI/ISO syntax to specify joins. For both inline directives and external directives, this release extends support in ANSI/ISO joined queries to the following classes of optimizer directives:
- Access-method directives (FULL, AVOID_FULL, INDEX, AVOID_INDEX, INDEX_SJ, AVOID_INDEX_SJ)
- Explain-mode directives (EXPLAIN, AVOID_EXECUTE)
- Optimization-goal directives (ALL_ROWS, FIRST_ROWS)
The join-order directive (ORDERED) is supported only in ANSI/ISO-compliant LEFT OUTER joins and INNER joins. Because of ordering requirements for OUTER joins, in ANSI-compliant joined queries that specify the RIGHT OUTER JOIN or FULL OUTER JOIN keywords the ORDERED join-order directive is ignored, but it is listed under Directives Not Followed in the sqexplain.out file.
This feature does not support the join-method directives (USE_NL, AVOID_NL, USE_HASH, AVOID_HASH, /BUILD, and /PROBE) in ANSI/ISO joined queries, except in cases where the optimizer rewrites the query so that it no longer uses the ANSI/ISO syntax.

Trigger Enhancements
Several new features expand the syntax and the functionality oftriggers on tables and on views:
- You can now define multiple INSERT, DELETE, UPDATE, and SELECT triggers on a table, and multiple INSTEAD OF triggers on a view.
- When a table, view, or column list has multiple triggers for a DML event type, Informix Dynamic Server executes all BEFORE triggered actions before the FOR EACH ROW actions, and executes all FOR EACH ROW actions before the AFTER actions.
- You can create SPL procedures that refer to applicable OLD and NEW trigger correlated values. Within the procedure you can access applicable OLD and NEW values and modify the NEW values, for example by using LET statements. From a FOR EACH ROW trigger action, you can execute this SPL procedure [syntax: EXECUTE PROCEDURE foo() WITH TRIGGER REFERENCES].
- New Boolean operators (DELETING, INSERTING, SELECTING, and UPDATING) can be used in procedures executed from trigger action statements. These test whether the currently executing triggered action was triggered by the specified type of DML event and return a Boolean value. The IF statement of SPL and the CASE expression of SQL can specify these operators as the condition in a trigger routine.
These features make it easier to incorporate IDS triggers on tables and on views within a heterogeneous information management system where multiple applications need to share the table or view.
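As a hedged sketch of the WITH TRIGGER REFERENCES syntax described above (the procedure name is hypothetical; the stock table is from stores_demo):

```sql
-- An SPL procedure that references the trigger's OLD and NEW rows.
CREATE PROCEDURE check_price()
    REFERENCING OLD AS o NEW AS n FOR stock;
    -- The UPDATING operator tests which DML event fired the trigger.
    IF (UPDATING) AND (n.unit_price < o.unit_price) THEN
        -- Modify the NEW value with a LET statement.
        LET n.unit_price = o.unit_price;
    END IF;
END PROCEDURE;

-- Execute the procedure from a FOR EACH ROW trigger action.
CREATE TRIGGER upd_stock UPDATE OF unit_price ON stock
    FOR EACH ROW (EXECUTE PROCEDURE check_price() WITH TRIGGER REFERENCES);
```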
You can find examples using multiple triggers in the demo directory: $INFORMIXDIR/demo/dbaccess/demo_ud/cr_trig.sql.

Index Self-Join Query Plans
In earlier Informix Dynamic Server versions, queries of tables with composite indexes performed inefficiently if the ratio of duplicate values to the number of distinct values was much higher for the leading columns than for subsequent columns of the index. A new feature of the query optimizer supports a new type of index scan, called an index self-join path, that uses only subsets of the full range of a composite index. The table is logically joined to itself, and the more selective non-leading index keys are applied as index bound filters to each unique combination of the leading key values. By default, the optimizer considers this type of scan.
The optimizer also supports two new access-method directives, INDEX_SJ and AVOID_INDEX_SJ. The INDEX_SJ directive forces an index self-join path using the specified index, or choosing the least costly index in a list of indexes, even if data distribution statistics are not available for the leading index key columns. The AVOID_INDEX_SJ directive prevents a self-join path for the specified index or indexes. This feature can improve query performance on tables with composite indexes.

Enhanced Concurrency with Committed Read Isolation
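A hedged sketch of forcing an index self-join path (the index name orders_ix is hypothetical):

```sql
-- Assume orders_ix is a composite index whose leading column has
-- many duplicates; INDEX_SJ forces the self-join scan on it.
SELECT --+ INDEX_SJ(o orders_ix)
       o.order_num, o.customer_num
  FROM orders o
 WHERE o.customer_num = 104;
```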
With the Committed Read isolation level, exclusive row-level locks held by other sessions can cause SQL operations to fail when attempting to read data in the locked rows. This release introduces a new LAST COMMITTED keyword option to the SET ISOLATION COMMITTED READ statement to reduce the risk of locking conflicts when attempting to read a table. This new syntax instructs IDS to return the most recently committed version of the rows, even if another concurrent session holds an exclusive row-level lock. This behavior can be extended to the Dirty Read, Read Uncommitted, and Read Committed isolation levels by setting the new USELASTCOMMITTED configuration parameter or through new options to the SET ENVIRONMENT statement.
This feature supports B-tree indexes and functional indexes, but not R-tree indexes. It does not support tables that are being accessed by DataBlade modules, tables with columns of collection data types, tables created using the Virtual Table Interface, tables with page-level locking, tables with exclusive table-level locks, unlogged tables, or tables in databases with no transaction logging.
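A hedged sketch of the two ways to enable the behavior for a session (the SET ENVIRONMENT value shown is my recollection of the documented syntax, so verify it against your release):

```sql
-- Per-session isolation setting with the new keyword:
SET ISOLATION TO COMMITTED READ LAST COMMITTED;

-- Or extend the last-committed behavior to a named isolation level:
SET ENVIRONMENT USELASTCOMMITTED 'COMMITTED READ';
```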
Enhanced Data Types and UDR Support in Cross-Server Distributed Queries
Earlier releases of IDS restricted the remote execution of UDRs in databases of other IDS instances to SPL routines that the EXECUTE FUNCTION or EXECUTE PROCEDURE statement invoked explicitly, and to SPL routines that queries and other DML operations invoked implicitly.
This release extends support for UDRs in cross-database and cross-server distributed operations to most contexts where a UDR is valid in the local database. In addition, external routines written in the C or Java languages are now valid in any distributed operation where an SPL routine is valid. This release also extends the data types that are valid as parameters or return values of cross-server UDRs, which were formerly restricted to non-opaque built-in SQL data types, by supporting these additional data types:
- DISTINCT of built-in types that are not opaque
- DISTINCT of BOOLEAN
- DISTINCT of LVARCHAR
- DISTINCT of the DISTINCT types listed above
These data types can be returned by SPL, C, or Java language UDRs that use these data types as parameters or as return values, if the UDRs are defined in all the participating databases. Any implicit or explicit casts defined over these data types must be duplicated across all the participating Dynamic Server instances. The DISTINCT data types must have exactly the same data type hierarchy defined in all databases that participate in the distributed query.
This feature does not relax existing restrictions on other opaque and DISTINCT types or on large-object, serial, and collection data types in locally or remotely executed SPL routines or external routines.
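A hedged sketch of the cross-server invocation syntax (the database, server, and routine names are hypothetical):

```sql
-- Explicitly invoke a UDR on a remote instance:
EXECUTE FUNCTION salesdb@remote_srv:get_region_total(2007);

-- Or invoke a remote UDR implicitly inside a distributed query:
SELECT order_num
  FROM salesdb@remote_srv:orders
 WHERE salesdb@remote_srv:is_priority(customer_num);
```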
- Automatic statistics collection.
IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or when you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the need to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.
- Query statistics in EXPLAIN output.
You can now view statistics about completed queries in the new Query Statistics section of SET EXPLAIN output. This section gives detailed information on each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of the Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command.
- New SAMPLING SIZE option in UPDATE STATISTICS MEDIUM mode.
In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.
If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:
* The minimum value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80)
If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and the confidence level, then the number of sampled rows will be the larger of these two values:
* The minimum value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the specified percent of rows and for the specified confidence level
If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size, which Dynamic Server uses if the specified size is smaller.
For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.

UPDATE STATISTICS MEDIUM FOR TABLE customer (city, state)
    SAMPLING SIZE 200 RESOLUTION 2 DISTRIBUTIONS ONLY;
Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time the MEDIUM mode UPDATE STATISTICS operation runs.
- New FILE TO option to redirect SET EXPLAIN output to a file of your choice.
When you execute a SET EXPLAIN FILE TO statement, explain output isimplicitly turned on. The default filename for the output issqexplain.out until changed by a SET EXPLAIN FILE TO statement. Oncechanged, the filename remains set until the end of the session or untilit is changed by another SET EXPLAIN FILE TO statement.
The filename can be any valid combination of an optional path and a filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" in the "/tmp" directory:

SET EXPLAIN FILE TO '/tmp/explain.vijay';
These features reduce the need for DBAs to perform UPDATE STATISTICS onindexes, enabling better query plans and ease of use.
Onbar backup order on Cheetah
1> onbar backup order for the root dbspace, sbspaces, blobspaces, and normal dbspaces.
The root dbspace is always backed up first, then sbspaces, then blobspaces; normal dbspaces are backed up last.
The onstat -d output is as follows:
IBM Informix Dynamic Server Version 11.10.UC1 -- On-Line -- Up 00:19:47 -- 44672 Kbytes
address number flags fchunk nchunks pgsize flags owner name
44cdf7f0 1 0x60001 1 1 2048 N B informix rootdbs
45ca6ea8 2 0x40001 2 1 2048 N B informix ddbs1
45c26ea8 3 0x40001 3 1 2048 N B informix ddbs2
45ca7a00 4 0x48001 4 1 2048 N SB informix sbdbs
45bbeeb0 5 0x60011 5 1 8192 N BB informix bdbs
5 active, 2047 maximum
Note: For BLOB chunks, the number of free pages shown is out of date.
Run 'onstat -d update' for current stats.
address chunk/dbs offset size free bpages flags pathname
44cdf950 1 1 0 50000 15421 PO-B /home/informix/cheetah/dsk/rootdbs
45c98e38 2 2 0 5000 4947 PO-B ./dsk/ddbs1
45ca73b0 3 3 0 5000 3449 PO-B ./dsk/ddbs2
45ca7b60 4 4 0 5000 4587 4587 POSB ./dsk/sbdbs
Metadata 360 268 360
45bb0e48 5 5 0 5000 ~1250 1250 POBB ./dsk/bdbs
5 active, 32766 maximum
After running onbar -b -w, the backup order can be seen in the bar_act.log file, as follows:
2007-06-15 04:41:10 5218 5216 /home/informix/cheetah/bin/onbar_d -b -w
2007-06-15 04:41:10 5218 5216 Archive started on rootdbs, sbdbs, bdbs, ddbs2, ddbs1 (Requested Level 0).
2007-06-15 04:41:11 5218 5216 Begin level 0 backup rootdbs.
2007-06-15 04:41:11 5218 5216 Successfully connected to Storage Manager.
2007-06-15 04:41:11 5218 5216 Completed level 0 backup rootdbs (Storage Manager copy ID: 751421824 0).
2007-06-15 04:41:12 5223 5218 Process 5223 5218 successfully forked.
2007-06-15 04:41:12 5223 5218 Begin level 0 backup sbdbs.
2007-06-15 04:41:12 5223 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5224 5218 Process 5224 5218 successfully forked.
2007-06-15 04:41:12 5224 5218 Begin level 0 backup bdbs.
2007-06-15 04:41:12 5224 5218 Successfully connected to Storage Manager.
2007-06-15 04:41:12 5225 5218 Process 5225 5218 successfully forked.
2007-06-15 04:41:12 5225 5218 Begin level 0 backup ddbs2.
2007-06-15 04:41:12 5226 5218 Process 5226 5218 successfully forked.
2007-06-15 04:41:12 5226 5218 Begin level 0 backup ddbs1.
2> Normal dbspace backup order
The onbar -b -w backup order for normal dbspaces does not depend on the order in which the dbspaces were created, but rather on how much space is used in each dbspace.
In the above example, ddbs2's usage is greater than ddbs1's, so onbar backs up ddbs2 first and then ddbs1, without following the creation order.
In IDS 11 you can dynamically change the Enterprise Replication configuration parameters. The changes take effect immediately, while replication is running. Following are the three commands to use, which are explained in detail along with examples.
Use the cdr change config command to replace the existing value of an Enterprise Replication configuration parameter with a new value while replication is active. All Enterprise Replication configuration parameters and environment variables can be changed with this command. The value persists while replication is active; the ONCONFIG file is not updated. If you want to make the change permanent, update the values in ONCONFIG instead. The configuration parameters and environment variables with which this command can be used are shown in Table 1.1.
For example, suppose the CDR_SUPPRESS_ATSRISWARN configuration parameter is set to suppress the generation of error and warning messages 1, 2, and 10, so that it appears in the ONCONFIG file as: CDR_SUPPRESS_ATSRISWARN 1,2,10. The following command changes the suppressed error and warning messages to 2, 3, 4, 5, and 7.
Before changing the values you can confirm the existing values by doing:
$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting:1-2,10
Now let's change the values to 2, 3, 4, 5 and 7:

$cdr change config "CDR_SUPPRESS_ATSRISWARN 2-5,7"
WARNING:The value specifed updated in-memory only.
The warning clearly indicates that the changes are in-memory only, which means that once replication is stopped and restarted the original values from ONCONFIG will take effect. Now let's confirm the new values:

$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7
Use the cdr add config command to add one or more values to an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. This option is available only for configuration parameters and environment variables that allow multiple values. The allowed configuration parameters and environment variables are listed in Table 1.1.
Following the "change config" example, if you want to add another error and warning message number for suppression to the existing list of 2, 3, 4, 5 and 7, say number 10, you can do that by using the following command:

$cdr add config "CDR_SUPPRESS_ATSRISWARN 10"
WARNING:The value specifed updated in-memory only.
We can confirm the new values from "onstat -g cdr config" as follows:

$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting:2-5,7,10
Use the cdr remove config command to remove the existing value of an Enterprise Replication configuration parameter while replication is active. The value persists while replication is active; the ONCONFIG file is not updated. The allowed configuration parameters and environment variables are listed in Table 1.1.
Again, following the same example from "add config", let's now try to remove suppression for error and warning messages 2, 3, 4 and 5, and keep 7 and 10, by using the "cdr remove config" command as follows:

$cdr remove config "CDR_SUPPRESS_ATSRISWARN 2-5"
WARNING:The value specifed updated in-memory only.

Confirm your changes:

$onstat -g cdr config | grep SUP
CDR_SUPPRESS_ATSRISWARN configuration setting: 7,10

NOTE: The commands change configuration parameters only in memory; they do not update the ONCONFIG file. To update environment variables in the ONCONFIG file, use the CDR_ENV configuration parameter.
The following table (Table 1.1) shows which kinds of changes are valid for each Enterprise Replication configuration parameter.
|Configuration Parameter ||cdr add config ||cdr change config ||cdr remove config |
|CDR_DBSPACE || ||Yes || |
|CDR_DSLOCKWAIT || ||Yes || |
|CDR_ENV CDR_LOGDELTA || ||Yes || |
|CDR_ENV CDR_PERFLOG || ||Yes || |
|CDR_ENV CDR_RMSCALEFACT || ||Yes || |
|CDR_ENV CDR_ROUTER || ||Yes || |
|CDR_ENV CDRSITES_731 ||Yes ||Yes ||Yes |
|CDR_ENV CDRSITES_92X ||Yes ||Yes ||Yes |
|CDR_ENV CDRSITES_10X ||Yes ||Yes ||Yes |
|CDR_EVALTHREADS || ||Yes || |
|CDR_MAC_DYNAMIC_LOGS || ||Yes || |
|CDR_NIFCOMPRESS || ||Yes || |
|CDR_QDATA_SBSPACE ||Yes ||Yes ||Yes |
|CDR_QHDR_DBSPACE || ||Yes || |
|CDR_QUEUEMEM || ||Yes || |
|CDR_SERIAL || ||Yes || |
|CDR_SUPPRESS_ATSRISWARN ||Yes ||Yes ||Yes |
|ENCRYPT_CDR || ||Yes || |
|ENCRYPT_CIPHERS || ||Yes || |
|ENCRYPT_MAC ||Yes ||Yes ||Yes |
|ENCRYPT_MACFILE ||Yes ||Yes ||Yes |
|ENCRYPT_SWITCH || ||Yes || |
So now that we've covered the BTS blade, let's look at another new datablade. Although first introduced in 10.00.FC6, the Binary DataBlade (BDT for short) needs some introduction.
The BDT blade is designed to allow for the storage and manipulation of binary encoded strings. It supports two new datatypes, and allows for logical operations like AND, OR, XOR and NOT to be used against these datatypes.
Like any official datablade, it is registered/installed by use of the blademgr utility.
After registering it in your database, you have two additional UDTs:
- binaryvar - a UDT which stores a binary string of up to 255 bytes.
- binary18 - a UDT which stores a fixed 18-byte binary string. If the value inserted into this UDT is less than 18 bytes, the field is right-padded with zeroes.
Binary data is entered into the columns by use of ASCII hex values. You can choose to preface the value with "0x" or not. The data must also be entered in pairs, since a byte of information runs from 00 to FF in hex. That means a value with an odd number of hex digits will generate an error, while a value whose digits are in pairs will not.
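As a hedged illustration of the pairing rule (the table name t and the values are hypothetical):

```sql
-- Assumes a table t with a binaryvar column.
INSERT INTO t VALUES ('0xABC');   -- fails: odd number of hex digits
INSERT INTO t VALUES ('0xABCD');  -- succeeds: digits are in pairs
```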
Currently the above datatypes can be manipulated by the following functions:
- bit_and() - performs a logical AND on two BDT datatypes, or one BDT datatype and a string constant.
- bit_complement() - performs a logical NOT on a single BDT datatype.
- bit_or() - performs a logical OR on two BDT datatypes, or one BDT datatype and a string constant.
- bit_xor() - performs a logical XOR on two BDT datatypes, or one BDT datatype and a string constant.
Also the following aggregates support the BDT datatypes:
- COUNT DISTINCT()
Please note the following are not supported:
The BDT blade supports B-tree indices, for both single-column and composite indices.
So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example, and insert some rows.
CREATE TABLE bdt_example (bin_id serial, bin_val binary18) ;
INSERT into bdt_example values (0, '102039847137');
INSERT into bdt_example values (0, '0xFE');
INSERT into bdt_example values(0, '49424d434f52504f524154494f4e32303036');
So let's run:

SELECT DISTINCT (bin_val) FROM bdt_example;
As you can see, the binary18 values which were inserted with fewer than 18 bytes had zeroes padded on the end.
So let's see what the complement of the data would be:
SELECT bit_complement(bin_val) from bdt_example;
Again you can see where the logical NOT was run against these three values.
The BDT datablade also has some additional diagnostic-type functions. They are:
- BDTRelease() - tells you what version of the BDT datablade you are using.
- BDTTrace(filename) - specifies the location of the debug/trace file for the BDT datablade.
- LENGTH(column) - returns the length, in bytes, of the BDT column specified.
- OCTET_LENGTH(column) - returns the length, in bytes, of the BDT column specified.
LENGTH and OCTET_LENGTH return the same values for BDT datatypes.
One additional note: the BDT blade is fully supported for use in ER.
Additional information on the Binary DataBlade can be found in The Built-In DataBlade Modules User's Guide.

Mark Jamison
So now that we have set up the Basic Text Search Blade, or BTS for short, the next questions are: what performance tuning tools are available, and how do we troubleshoot problems?
BTS comes with four functions in addition to the bts_contains function. They are:
- bts_tracefile: lets you designate the location and name of your trace files.
- bts_tracelevel: enables tracing at the session level for BTS calls.
- bts_index_compact: think of this function as the index cleaner for the BTS blade.
- bts_release: allows you to skip using blademgr to find out what version of the blade you are using.
The biggest performance tuning tool is bts_index_compact. From an administrative standpoint, we treat BTS indices the same way we treat B-tree indices, with one exception: there is no automated thread which goes out and cleans the BTS indices, whereas for B-tree indices we have btscanner threads. bts_index_compact is the BTS version of what the btscanner thread does. So from a performance standpoint, you should frequently run bts_index_compact in order to decrease the size of the index, both in disk space and in number of elements, and thus increase performance. Please note that you can make this process automatic by creating your index with delete='immediate' in your statement. This will eliminate the need for compacting the index, but will make all deletes run longer. A better option would be to set up a function using the admin API to periodically run bts_index_compact on your indices, thus creating a poor man's btscanner for BTS, so to speak.
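A hedged sketch of running the compaction by hand (the index name bts_idx is hypothetical, and the argument form is my recollection of the BTS documentation, so verify it against your release):

```sql
-- Compact the BTS index to reclaim space from deleted documents.
EXECUTE FUNCTION bts_index_compact('bts_idx');
```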
Another performance tuning tip is to make sure that you have created your extspaces on fast disk. If you create an extspace on an NFS mount, you are basically asking for horrible performance. A little bit about extspaces may be in order. External spaces, or extspaces, are exactly what they sound like: spaces external to the database proper. What this means is that you cannot use normal onstats to monitor an extspace.

Troubleshooting BTS Issues:
So now you have run into a problem. Maybe it is something as simple as not remembering where your extspace is actually located; maybe you are having bigger problems. In either case, this is when you need to enable BTS tracing. The process is fairly straightforward, as BTS provides two functions to handle it. First you call bts_tracefile to set your trace file location, then you call bts_tracelevel, which will dump information to the file you specified.
Let's say that you had just forgotten the syntax and location of your BTS extspace. The quickest way to find out would be to run the following:
execute function bts_tracefile('/tmp/bts.log');
execute function bts_tracelevel(10);
select * from customer where bts_contains(company,'all~');
After that just read the file bts.log
Below is a snippet from the trace file generated from the above query:
Taking a quick look at this you can see what your extspace info is.
database name = stores_demo
owner name = informix
ext space name = bts1
You also know now that your extspace is located in /work1/informix/11.10.FC1/bts_idx/stores_demo/informix
A quick word on extspaces. Extspaces are spaces that are external to the database; as such, there is no onstat that monitors them. If you forget the name of your extspace or forget its location, the above method is essentially your only option for finding it. One other important note about extspaces: if you want to back them up, there is only one supported method, and it is as follows:

onmode -c block
External backup of the external space
External backup of the instance
onmode -c unblock

What the above means is that, rather than going through these steps to safely back up your BTS indices, you may prefer to simply plan on rebuilding your BTS indices in the case of a restore.
One other consideration when troubleshooting BTS issues: BTS does not allow multiple bts_contains calls in the predicate portion of an SQL statement. For example,

select * from customer where bts_contains(company, 'fred') or bts_contains(company, 'all')
would fail with the error:
(BTS22) - bts_error bts_contains requires an index on the search column
This concludes my series on the BTS blade.

Mark Jamison
So you want to get the BTS datablade running on a database? Well, here is a step-by-step quick reference to get your database ready to use the BTS DataBlade.
- First, you need to register the BTS blade in your database. This is done with the following method:
- Next, you need to create an external space for your BTS index. This is accomplished by the following method:
- Next, you need to set up your instance so that the BTS virtual processor is created. This is done by adding the following to the $ONCONFIG file:
Please note that at this point you have two options to start your BTS VP: the first is to cycle the instance (i.e. onmode -ky followed by oninit); the second is via the onmode -p command (i.e. onmode -p +1 bts).
- Finally, you need to create an index using the bts access method. Below is an example of the code.
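A hedged sketch of such an index (the index and extspace names are hypothetical, and the bts_char_ops operator class name is my recollection of the BTS documentation, so verify it for your column type):

```sql
-- Create a BTS index on the company column of customer,
-- storing the index in the extspace bts1.
CREATE INDEX bts_idx ON customer (company bts_char_ops)
    USING bts IN bts1;
```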
And now, just to validate that the BTS blade is up and running, let's run a quick test. We'll run the following query:

SELECT company FROM customer WHERE bts_contains(company, '%all');
And so now you should have the BTS set up and ready to run.
In our final blog on the BTS we will go over some performance tuning and how to troubleshoot some issues.

Mark Jamison