cheetahblog
Welcome to the IDS Experts Blog, an international team blog authored by IBM software engineers. These experts from the Informix Technical Support and R&D labs will be posting about a variety of topics related to Informix and IDS. A popular topic over the next few months is likely to be the 11.10 release of Informix Dynamic Server, currently in Open Beta.
If you have suggestions on articles you'd like to see from support or development engineers, please add a comment to the blog. Your feedback to this blog goes directly to engineers working on Informix products.
cheetahblog
Depending on your existing Database Server setup, you might be able to upgrade directly to the current version by installing the product in a new directory, copying a few configuration files, and starting the new server to convert your database data. You can upgrade directly from any of the following products: Dynamic Server Version 11.10, 10.00, 9.40, 9.30, 9.21, or 7.31.
Upgrading is an in-place migration method that uses your existing test and production hardware. The
operating system on those machines must be supported by Dynamic Server Version 11.50. Also, you must
have enough space for the system database data conversion.
Upgrading consists of these steps:
1. Verify the integrity of the data with oncheck, and perform a level-0 backup. If you are using Enterprise Replication or High-Availability Data Replication, stop replication.
2. Install the new product on the machine.
Important: Do not install it over the existing product.
3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.
4. Start the Dynamic Server Version 11.50 instance. The database data is automatically converted.
5. Run UPDATE STATISTICS MEDIUM for non-leading index columns, then run UPDATE STATISTICS FOR PROCEDURE (a short sketch of these statements follows below).
If necessary, you can revert from the new server to the old one. In the event of a problem during reversion, you can restore the level-0 backup.
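As a rough sketch of step 5, the statements look like this; the table and column names here are hypothetical, so adjust them to your own schema:

-- Refresh distributions on non-leading index columns of one table:
UPDATE STATISTICS MEDIUM FOR TABLE customer (city, state) DISTRIBUTIONS ONLY;
-- Re-optimize all stored procedures against the converted server:
UPDATE STATISTICS FOR PROCEDURE;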
This method works perfectly on Windows, though it is not well documented.
2. Make sure that you do not initialize the server when installing.
3. Copy the ONCONFIG file to the target and set parameters that are new for the current release.
4. Bring the server up using Control Panel->Services or any other method without initializing.
5. Monitor the online.log for the Conversion Successful message.
Once the upgrade has completed successfully, you can remove the old instance. When you run the uninstaller, make sure that you select "Retains all databases, but removes server binaries".
If for some reason you want to revert to the previous instance, restore it from the level-0 backup.
The only time this will not work on Windows is when you are upgrading from an 11.10.UC1 version to another 11.10 version, say 11.10.UC2.
Suma Vinod
So now that we've covered the BTS blade, let's look at another DataBlade: the Binary DataBlade (BDT for short). Although it was first introduced in 10.00.FC6, it still needs some introduction.
The BDT blade is designed to allow for the storage and manipulation of binary encoded strings. It supports two new datatypes, and allows for logical operations like AND, OR, XOR and NOT to be used against these datatypes.
Like any official DataBlade, it is registered and installed by using the blademgr utility.
After registering it to your database, you have two additional UDTs.
But inserting the following will not generate an error:
Currently the above datatypes can be manipulated by the following functions:
Also the following aggregates support the BDT datatypes:
The BDT blade supports B-tree indexes, both single-column and composite.
So now that we have the details, let's create an example. We will create a table in the stores_demo database called bdt_example and insert some rows.
CREATE TABLE bdt_example (bin_id serial, bin_val binary18);
INSERT INTO bdt_example VALUES (0, '102039847137');
INSERT INTO bdt_example VALUES (0, '0xFE');
INSERT INTO bdt_example VALUES (0, '49424d434f52504f524154494f4e32303036');
So let's run:
SELECT DISTINCT(bin_val) FROM bdt_example;
As you can see, the binary18 values that were inserted with fewer than 18 bytes were padded with zeros at the end.
So let's see what the complement of the data would be:
SELECT bit_complement(bin_val) FROM bdt_example;
Again you can see where the logical NOT was run against these three values.
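The other logical operations follow the same pattern as bit_complement(). As a small sketch, assuming the module's bit_and() and bit_xor() functions, the sample rows can be paired and combined like this:

-- Pair each row with every later row and combine their binary values:
SELECT a.bin_id AS id1, b.bin_id AS id2,
       bit_and(a.bin_val, b.bin_val) AS anded,
       bit_xor(a.bin_val, b.bin_val) AS xored
    FROM bdt_example a, bdt_example b
    WHERE a.bin_id < b.bin_id;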
The BDT DataBlade also has some additional diagnostic-type functions. They are:
One additional note: the BDT blade is fully supported for use in Enterprise Replication (ER).
Additional information on the Binary DataBlade can be found in the Built-In DataBlade Modules User's Guide.
Mark Jamison
Pradeep Natarajan
Have you heard? IBM Informix 12.10.xC8 is publicly available for download! The most significant update in this release is on-disk encryption, also known as Encryption At Rest (EAR). The EAR feature allows you to encrypt the data at the lowest level, on the disk. Now you can rest easy, knowing that your data is encrypted and secure in the event of a physical disk loss or theft. In a world where low-cost, physically accessible computers are increasingly used for temporary storage, aggregation, and analytics of data, it is vital to keep that data secure. You can start encrypting your storage spaces by upgrading to 12.10.xC8.
Pradeep Natarajan
First, apologies to those who have been following this blog regularly in the past! Unfortunately, this blog had remained dormant for far too long. Make no mistake, it was just the blog that had been dormant, not the product or the experts behind it. Our team has been busy enhancing our beloved IBM Informix product as well as producing content for the user community. As evidence, you will see a series of posts appearing here that point to various blogs, presentations, sample applications, and tutorials available in the public domain.
It is time to revive this blog and provide an avenue for our experts to blanket the user community with Informix wisdom!
From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME was not one of them. That changes in Cheetah: IDS 11.x supports RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without it, the DBA would have to plan a time at which ER could be removed from the object so that the rename could be performed, which requires advance planning and scheduling during non-peak hours.
When a RENAME operation is performed, affected replicate definitions are updated to reflect the impact of the RENAME, and a control message is sent to the other servers informing them of the rename. A RENAME operation is allowed only on a mastered replicate. ER does not propagate the RENAME command itself (there are plans to implement that in the future); the user simply issues the rename DDL statement on each of the servers affected by the RENAME. If the rename is of a column or a table, the replicate is cycled (cdr stop/start replicate) in much the same way as with a table ALTER. If a database rename occurs, however, ER itself is cycled (cdr stop/start). This cycling occurs only if the rename affects something on the local node. In all cases, the rename causes a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.
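For illustration, here is a hedged sketch of the DDL you would issue on each affected server; the object names are hypothetical and each statement is an independent example:

-- Rename a replicated column:
RENAME COLUMN customer.fname TO first_name;
-- Rename a replicated table:
RENAME TABLE orders TO orders_archive;
-- Rename a replicated database (this cycles ER rather than just the replicate):
RENAME DATABASE stores_demo TO stores_demo_v2;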
Vijay Lolabattu
Pradeep Natarajan
In this latest article, published today on the IBM Data Management blog, Pradeep Muthalpuredathe, Senior Manager for Informix Engineering & Support, shares his thoughts about the Informix on Cloud offering. Get started with this powerful offering today.
Also check out the newly redesigned landing page for IBM Informix! All the IBM Informix resources you need are in one place - product offerings, select customer references, videos, demos, white papers, code samples, and downloads.
In March, an article on installing IDSAdmin in a XAMPP environment was published on this blog. That article has just been updated by Inge Halilovic with instructions and screenshots for the latest available version of OpenAdmin Tool, downloadable from the Free Product Download Site. The new version of OAT has improved graphics, simpler installation, new features such as support for Mach 11 topology, and several bug fixes.
Guy Bowerman
The latest Beta drop of IDS Cheetah is now available, with some important new features integrated into the code line.
New features available in Beta drop 5 include:
The following Infocenter page has a complete list of new features for each Beta drop: http://publib.boulder.ibm.com/infocenter/idshelp/v111/index.jsp?topic=/com.ibm.po.doc/newfeatures_11.10_beta.html.
Guy Bowerman
OpenAdmin Tool for IDS has been greatly enhanced in version 2.20 with a completely redesigned user interface, a new automated installer, and lots of new IDS admin functionality. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.
New feature highlights of OpenAdmin Tool for IDS version 2.20 include:
Download OAT version 2.20 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd, or at www.iiug.org in the Members Area.
Erika Von Bargen
IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or when you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the necessity to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows and 0.5% for larger tables.
You can now view statistics about completed queries in the new Query Statistics section of the SET EXPLAIN output. This section gives detailed information on each scan, join, and sort, such as the estimated cost, the number of rows scanned, the estimated number of rows, and the number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of the Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command.
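As a minimal sketch (the query and tables here are hypothetical), you can produce the explain output for the current session like this, assuming EXPLAIN_STAT is enabled:

SET EXPLAIN ON;
SELECT c.customer_num, o.order_num
    FROM customer c, orders o
    WHERE c.customer_num = o.customer_num;
-- The query plan and the Query Statistics section are written to sqexplain.out.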
In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.
If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:
* The min value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80).
If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:
* The min value that you specify immediately after the SAMPLING SIZE keywords
* The sampling size that is required for the specified percent of rows and for the specified confidence level.
If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.
For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.
UPDATE STATISTICS MEDIUM FOR TABLE customer (city, state)
    SAMPLING SIZE 200 RESOLUTION 2 DISTRIBUTIONS ONLY;
Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.
When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until it is changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.
The filename can be any valid combination of an optional path and a filename. If no path component is specified, the file is placed in your current directory. The file is owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" in the "/tmp" directory.
SET EXPLAIN FILE TO '/tmp/explain.vijay';
These features reduce the need for DBAs to perform UPDATE STATISTICS on indexes, enabling better query plans and ease of use.
Vijay Lolabattu
Have you ever run into a situation where certain ER administrative commands were executed and caused ER to fail? Without knowing the exact sequence of activities, it can be very difficult to identify the actual commands that caused the problem. Unfortunately, not all ER commands generate alerts or update the message log. If you did not use a script to run the ER administrative commands, there is no easy way to track the sequence of activity that caused the failure.
An undocumented mechanism can be used to track ER object state changes. Once you enable this tracking mechanism, the Informix server logs information about ER administrative operations, with full command-line arguments, in the server message log file, which can help in analyzing ER issues. By default it is disabled. To enable the tracking, use:
change onconfig "CDR_FEATURES
Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That time is stored with the backup information. The advantage of using a whole-system backup is that you can restore the storage spaces with or without the logical logs. Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.
One of the new features of Cheetah is the ability to back up and restore dbspaces in parallel with whole-system functionality. With a parallel backup, multiple processes run simultaneously, each backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.
The ONCONFIG parameter BAR_MAX_BACKUP is used for whole-system backups and restores as well. The BAR_MAX_BACKUP parameter specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.
To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar will spawn at most 3 processes concurrently.
If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed. The number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less.
To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.
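To confirm the value the server is actually using, here is a hedged query sketch against the sysmaster database (it assumes the cf_name and cf_effective columns of the sysconfig table):

DATABASE sysmaster;

-- Show the effective value of BAR_MAX_BACKUP for the running instance:
SELECT cf_name, cf_effective
    FROM sysconfig
    WHERE cf_name = 'BAR_MAX_BACKUP';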
Also in Cheetah, storage manager efficiency is improved by improving dbspace ordering. The "biggest" dbspaces are backed up first and the "smallest" last; the dbspace with the most used pages is the "biggest" and is backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in the different dbspaces. The new backup/restore order is also effective for non-whole-system backups and restores. A restore is done in the same order as the backup, to ease the storage manager's access to stored objects.
To implement the new ordering of dbspaces for backup and restore, changes are made to the bar_instance table in the sysutils database: a new column, ins_backup_order (integer, default 0, not null), is added to the bar_instance table (a small query sketch follows below).
Changes are also made to the ixbar file:
A new field with the backup sequence integer (the last field in the line) is added to the ixbar file.
The value of the new field is always "0" (zero) for log backup objects.
Even if BAR_MAX_BACKUP is set to 1 for a serial backup, the dbspaces will be backed up using the new dbspace order.
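To see the order recorded for existing backup objects, here is a small query sketch against the sysutils catalog (only ins_backup_order is named by this feature; the grouping is just illustrative):

DATABASE sysutils;

-- Count backup objects per recorded backup position:
SELECT ins_backup_order, COUNT(*) AS num_objects
    FROM bar_instance
    GROUP BY ins_backup_order
    ORDER BY ins_backup_order;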
Only one checkpoint is taken for all dbspaces, just before the backup of the rootdbs.
Rootdbs is backed up first, without parallelism (no change here).
Before-image processor threads ("arcbackup2") are started at this time, one for each dbspace (more threads running in parallel).
As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as the backup progresses).
New order of backup: the order is based on the used-pages count at the start of the backup. The dbspace with the most used pages is the "biggest" and is backed up first.
For the restore, not much has changed, except that it is now parallel.
Rootdbs is restored first, without parallelism (no change here).
Dbspaces are restored in the same order as they were backed up; the value of ins_backup_order determines the correct order. If BAR_MAX_BACKUP is changed between backup and restore, the objects will still be restored in the same order, but their timing relative to each other may differ significantly, which could have negative effects on storage manager performance.
Rashmi Chawak
Hopefully, you've already heard about the Informix wire listener. But did you know that there are actually three different types of listeners? This means that the listener provides application developers with three completely different paradigms for connecting their applications to the Informix database server.
Suppose you are rapidly developing a new application and crave the flexibility and power that the schemaless "NoSQL" data model will give you to adapt to use cases you cannot necessarily foresee at the current moment. Or maybe you want a quick and easy way to get data into and out of Informix for a web application developed with the newest and hottest web framework, and you want to do so in a way that doesn't make you think or worry about whether the underlying data is in a relational, time series, or JSON schema. Or maybe you are even developing an Internet of Things application for a wide array of sensors that will be gathering data that needs to be pushed to your database in a way that is simple, lightweight, and asynchronous. The Informix wire listener can help you build such solutions, and many more! And it allows you to build such applications while harnessing the performance, scalability, and reliability of the enterprise-class, embeddable database that is Informix.
So what are these three listener types?
The Mongo listener was the first and original listener type and allowed applications developed for MongoDB to run against Informix without modification. This enabled an entirely different type of application -- flexible, schemaless, and in line with a rapid application development paradigm -- to be run against the Informix database server. It also enabled a whole new ecosystem of modern and open source drivers, those developed for the MongoDB protocol, to be used to connect applications to Informix.
But the original Mongo listener type was about more than just Mongo compatibility. It enabled a new "hybrid" application model -- one that combined traditional relational data with the new NoSQL, JSON-based data. The listener was developed to seamlessly work with any type of data that Informix supports, be it relational, JSON, or time series.
Want to learn more about using Mongo drivers to connect to Informix? Check out a great tutorial written by Martin Fuerderer and hosted on the Informix and NoSQL blog. This tutorial takes you step by step through understanding the listener, NoSQL data, and how to use Mongo syntax and tools to connect to Informix.
REST. So ubiquitous in the world of web applications. This was the next evolution of the Informix wire listener. The listener's REST interface allows any application to get data into and out of Informix using HTTP. A simple interface that opened up Informix connectivity to just about every modern programming language without the need for an Informix-specific driver.
Want to learn about using REST to communicate with Informix? Head on over to Martin Fuerderer's article First Steps with the REST API and then check out the REST API syntax page in the Informix Knowledge Center to get more details and examples of the REST syntax for accessing and querying data.
New to 12.10.xC7, the MQTT listener type enables you to use the MQTT publish model to insert data into the Informix database server. This is perfect for Internet of Things scenarios where data collected on the "edge" needs to be published and stored in a central data store in a lightweight and asynchronous way.
Want to learn more about using MQTT to publish data to Informix? Check out Working with the new 12.10.xC7 MQTT Listener over on Shawn Moe's The Lenexa View blog for an in-depth discussion of the MQTT listener, including some sample Java code.
And last but not least, it is worth reiterating that all three listener types work seamlessly on relational, JSON, timeseries data, or even a combination. No matter the underlying way in which the data is stored, the Informix wire listener will manage those details automatically for you, allowing you as the application developer to focus on your application instead of the nuances of data access.
Want to get started, but need more info on how to start different or multiple listener types? You can find that here in the IBM Informix Knowledge Center.
I don't know about the rest of you, but I have had a lot of problems truly determining whether my SQL cache settings are efficient. If my User Defined Routine (UDR) cache, for example, always seems to be 50% full, does that mean the cache only uses 50%, and so the settings are too high? Or is it at 50% because I'm constantly cleaning the pools, and thus I'm configured too low? Beginning in Cheetah you now have a means to diagnose that type of information, and it's located in the sysmaster database. The table name is syssqlcacheprof, and it contains profile information for each of the caches.
Below is a sample output from this table:
As you can see in the above, you can now quickly identify how often the cache is removing entries (or cleaning) and what the efficiency of your cache is, in terms of a hit rate. In fact, a very simple query that could be used for analyzing the hit ratio of the caches would be:
SELECT p.*, (hits / (hits + misses)) AS hit_ratio
    FROM syssqlcacheprof p;
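One caveat, based on the assumption that the hits and misses counters start at zero: a cache that has never been searched would make hits + misses zero, so a slightly more defensive variant of the same query is:

-- Avoid a divide-by-zero for caches that have not been used yet:
SELECT p.*,
       CASE WHEN hits + misses > 0
            THEN hits / (hits + misses)
            ELSE 0
       END AS hit_ratio
    FROM syssqlcacheprof p;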
This table also provides you the benefit of quickly seeing how much memory each cache is actually using, something that before Cheetah was a bit awkward to calculate. While not a major feature of Cheetah, this new sysmaster table is a valuable new asset in performance tuning.
Mark Jamison