Informix Experts




For a complete list of current SQL reserved words, see the IBM Informix Guide to SQL: Syntax.


Sanjit Chakraborty [Read More]

The Database Explorer Window

cheetahblog Tags:  dba database developer explorer 5,312 Views
Note: if you have missed any articles in this series, check the Introduction and Table of Contents.

The Database Explorer Window

Now that you have a workspace, and a connection established, it is time to take a quick look around the options available to you in the Database Explorer window. By default this window is located in the lower left hand corner, and on start up looks like this:


Database Explorer


As you can see this has a tool bar as well as the Connections folder. Let's take a quick look at the options you have on the tool bar. Traveling from left to right, you have:

  • Collapse all: This allows you to quickly collapse all the folders you have opened in your explorer, as shown in the above dialog.
  • Link: This allows you to link data in the explorer to an open editor.
  • SQL Scrapbook: This lets you open up a file that saves the SQL you are testing.
  • Disconnect: Will disconnect any connection you have highlighted.
  • New Connection: Will take you to the connection tool, so you can add more connections and databases to your Database Explorer.
  • SQL Editor: This lets you go straight to the SQL Editor window.
  • Export Connection: This lets you export an existing connection to XML. This makes it easy to deploy connections to multiple machines.
  • Import Connection: Allows you to import the connection settings from an XML file. So if, for instance, someone were to export the connection settings for a development database, rather than having to get all the information to fill in the connection string yourself, all you have to do is import the connection.


Now let's open up our Connections folder in this window. We will be looking at the stores_demo database. So first we open the Connections folder, and then the stores_demo connection folder, and finally the stores_demo database folder. And when the folders are open you will see something like the following:

Stores Demo Database Folder Opened

As you can see, each connection tells you what type of instance you are connecting to; in my case all of my instances are 11.10, though Developer Workbench reports them as 11.0. You can see that the Database folder has three sub-folders. Those folders are:

Click on each link to get a detailed overview, and examples, of what each of these folders can do.

Mark Jamison
[Read More]

OAT Version 2.23 Available Now!

cheetahblog Tags:  openadmin oat 5,264 Views

The newest version of OpenAdmin Tool for IDS, Version 2.23, has just been released! Download the new version today to get the enhanced SQL Explorer and version 2.0 of the ER Plugin.



New feature highlights:



The SQL Explorer has been newly redesigned. New features include:

  • Filters and search fields for viewing SQL tracing data.
  • Support for different SQL trace levels - global or user - so you can manage what kinds of SQL information are traced for IDS V11.50.xC1 servers.
  • Support for suspending and resuming history tracing, without releasing resources, on IDS V11.50.xC1 servers.


ER Plug-in Version 2.0: Version 2.0 of OAT’s ER plug-in includes a new Replicate Explorer and becomes the first step in supporting ER administration graphically through OAT.





  • The Replicate Explorer now lets you monitor your replicates and replicate sets.
[Screenshot of the Replicate Explorer]



  • The Node Details -> Configuration page now supports the editing and updating of ER configuration parameters






Download OAT Version 2.23 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd



For additional information on OpenAdmin Tool for IDS, including feature details, screenshots and demos, go to www.openadmintool.org.

Erika Von Bargen

[Read More]

New Common Development tool for IDS and DB2

cheetahblog Tags:  developer client tool common 5,263 Views

Common Development Tool

One of the big ideas recently proposed by IBM, and further proof of our commitment to IDS, has been the idea of a common set of client tools (like a Java Common Client (JCC), for example). If you are going to have a common set of clients, or programming interfaces, it might not be a bad idea to have a common development tool that will allow you to use these common clients for both DB2 and IDS. Well, IBM has released in beta an Eclipse-based common tool that allows that. For those who want to jump right in, here is the link:

IBM Data Server Developer Workbench 9.5

As with many things IBM, since this is an open beta the actual name at the time of release may change. I will be blogging on this product over the next few weeks, covering its capabilities from the IDS side of the tool. So far there will be six parts to this report, not counting this introduction.



Part 1 - Installation for IDS
Part 2 - Setting up your workspace and connecting to IDS.
Part 3 - The Database Explorer window.
Part 4 -  ER Diagramming
Part 5 - The SQL Builder and The SQL Editor Windows.
Part 6 - PureQuery and IDS



Mark Jamison

[Read More]

Derived Table Support in IDS 11.10

cheetahblog Tags:  iterator funtion derived table multiset 5,258 Views
As mentioned in this blog, IDS 11.10 allows SELECT statements to include syntax to specify a full select sub-query in the FROM clause. I would like to describe it with some examples here. In IDS 10, if you had to use sub-queries in the FROM clause, you had to cast the sub-query statement to a multiset and then to a table. Here are some example SELECT statements showing the syntax in prior IDS versions:
    SELECT * FROM table(multiset(select a, b, c from tab1, tab2 where tab1.a = tab2.c)) vt(va, vb, vc), emptab WHERE vt.va = emptab.id;
    a, b, c: columns of any data type

    SELECT * from table((select coll_dat from sales_tab)) AS c_table(s_month, s_day);
    coll_dat: should be a collection data type column

    SELECT * from table(function f1());
    f1(): an iterator function

In IDS 11.10, the above statements can be written in the same way or you can re-write them as:
    SELECT * FROM (select a, b, c from tab1, tab2 where tab1.a = tab2.c) vt(va, vb, vc), emptab WHERE vt.va = emptab.id;

    SELECT * from (select coll_dat from sales_tab) AS c_table(s_month, s_day);

    SELECT * from table(f1());
    f1(): f1() can be any function

These sub-queries in the FROM clause are called derived tables or table expressions. You can use simple, UNION, or joined sub-queries, including OUTER joins, and can include the ORDER BY clause. You can also write nested FROM clause queries and can combine old and new syntax in a SQL statement. Here is an example:
    select * from (select * from (select col1 from tab3 where col1 = 10 order by col2 desc)), table(multiset(select col1 from tab2 where col1 = 20)) order by 1;

The derived table support in IDS 11.10 enhances its capability to execute queries that comply with the SQL-92 standard without any modifications.

Suma Vinod [Read More]

Establishing JCC connections in 11.10

cheetahblog Tags:  jdbc development jcc connectivity 5,233 Views

Establishing JCC connections for IDS


While working with the open beta of IBM Data Server Developer Workbench 9.5, I noticed that in addition to that product you also become an open beta participant for the Java Common Client (JCC) driver. In fact, the Workbench uses the JCC driver by default. On the one hand, this is absolutely great, because it helps you see where IBM's long-range direction is with clients. On the other hand, JCC requires some special setup at the IDS instance level. This article will detail how to set up your instance to handle JCC connections, and then will use the Developer Workbench to show how you connect using JCC.

What happened to my SQLI?

The reason JCC connections require engine configuration changes is that the default protocol IDS instances use to communicate is SQLI, whereas JCC needs the DRDA protocol. Currently IDS does not allow both SQLI and DRDA to connect through the same listener thread, so we need to configure a listener thread and port to accept DRDA calls.

This is done by making a simple change to the SQLHOSTS file as shown below:

<instance name> drsoctcp <hostname> <service/port>

or

<instance name> drsipcshm <hostname> <service/port>

As you can see, the protocols look the same as our old ones, except instead of the prefix ol, you use dr.

A full example is below:

cheetah_dr drsoctcp gofish sqlexec2

After changing the SQLHOSTS file, all you need to do is place this entry in your onconfig file, in the DBSERVERALIASES variable. So if you use the example SQLHOSTS entry above and currently had no aliases, you would have the following in your onconfig file:

DBSERVERALIASES cheetah_dr

Now just bring your instance off-line and then back on-line, and then we are ready to use the Developer Workbench.

Setting up your JCC Driver for 11.10 in Developer Workbench 9.5

Now that you have set your IDS instance up to accept JCC driver connections, let's establish one using the Developer Workbench 9.5.

In your Database Explorer window, select Connection -> New Connection as below:



This will bring you to the Connection Settings window. Below is an example of the JCC driver with all fields filled in:

JCC Connection Settings

As you can see, you do not need an INFORMIXSERVER for your connection, just the Database, Host, and Port Number. Once you configure this for your server, just click the Test Connection button. After that, just click the Finish button. You now have a JCC connection that you can use for your application running against IDS 11.10.
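If you prefer to connect from code rather than from the Workbench, the same DRDA setup can be used from any Java program through the JCC driver. The following is only a minimal sketch, not something taken from the Workbench itself: the host name (gofish), DRDA port (9089), database (stores_demo), and credentials are hypothetical placeholders, and it assumes the JCC driver jar (db2jcc.jar or db2jcc4.jar) is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JccConnectDemo {
    public static void main(String[] args) throws Exception {
        // Load the JCC driver explicitly (optional with a JDBC 4 driver jar).
        Class.forName("com.ibm.db2.jcc.DB2Driver");

        // jdbc:ids:// is the JCC URL form for Informix over DRDA. The port is
        // whatever service/port you mapped to the dr* alias in SQLHOSTS;
        // host, port, database, and credentials here are hypothetical.
        String url = "jdbc:ids://gofish:9089/stores_demo";

        try (Connection conn = DriverManager.getConnection(url, "informix", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT FIRST 5 customer_num, fname FROM customer")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " " + rs.getString(2));
            }
        }
    }
}

As in the Workbench connection dialog, there is no INFORMIXSERVER in the URL; the DRDA alias you configured in SQLHOSTS is simply whatever answers on that port.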

Mark Jamison
[Read More]

One Listener, Three Ways to Connect your Applications to Informix

Erika Von Bargen Tags:  nosql rest informix listener mqtt 5,189 Views

Hopefully, you've already heard about the Informix wire listener. But did you know that there are actually three different types of listeners? This means that the listener provides application developers with three completely different paradigms for connecting their applications to the Informix database server.

 

Suppose you are rapidly developing a new application and crave the flexibility and power that the schemaless "NoSQL" data model will provide you to adapt to use cases going forward that you cannot necessarily foresee at the current moment. Or maybe you want a quick and easy way to get data into and out of Informix for a web application developed with the newest and hottest web framework, and you want to do so in a way that doesn't make you think or worry about whether the underlying data is in a relational, time series, or JSON schema. Or maybe you are even developing an Internet of Things application for a wide array of sensors that will be gathering data that needs to be pushed to your database in a way that is simple, lightweight, and asynchronous. The Informix wire listener can help you build such solutions, and many more! And it allows you to build such applications all the while harnessing the performance, scalability, and reliability of the enterprise-class, embeddable database that is Informix.

 

So what are these three listener types?

  • The Mongo listener

The Mongo listener was the first and original listener type and allowed applications developed for MongoDB to run against Informix without modification. This enabled an entirely different type of application -- flexible, schemaless, and in line with a rapid application development paradigm -- to be run against the Informix database server. It also enabled a whole new ecosystem of modern and open source drivers, those developed for the MongoDB protocol, to be used to connect applications to Informix.

But the original Mongo listener type was about more than just Mongo compatibility. It enabled a new "hybrid" application model -- one that combined traditional relational data with the new NoSQL, JSON-based data. The listener was developed to seamlessly work with any type of data that Informix supports, be it relational, JSON, or time series.

Want to learn more about using Mongo drivers to connect to Informix? Check out a great tutorial written by Martin Fuerderer and hosted on the Informix and NoSQL blog. This tutorial takes you step by step through understanding the listener, NoSQL data, and how to use Mongo syntax and tools to connect to Informix.
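As a rough sketch of what that looks like in code (not taken from the tutorial), here is a small Java program that uses the standard MongoDB Java driver against the Mongo listener. The host, port, database, and collection names are assumptions for illustration only; 27017 is just a commonly used listener port.

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class MongoListenerSketch {
    public static void main(String[] args) {
        // Hypothetical host/port where the Informix Mongo listener is running.
        MongoClient client = new MongoClient("localhost", 27017);
        try {
            MongoDatabase db = client.getDatabase("stores_demo");
            MongoCollection<Document> customers = db.getCollection("customer");

            // Insert a JSON document and read it back with a Mongo-style query;
            // the listener maps these operations onto the underlying Informix data.
            customers.insertOne(new Document("fname", "Alice").append("city", "Lenexa"));
            for (Document d : customers.find(new Document("city", "Lenexa"))) {
                System.out.println(d.toJson());
            }
        } finally {
            client.close();
        }
    }
}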

  • The REST listener

REST. So ubiquitous in the world of web applications. This was the next evolution of the Informix wire listener. The listener's REST interface allows any application to get data into and out of Informix using HTTP. A simple interface that opened up Informix connectivity to just about every modern programming language without the need for an Informix-specific driver.

Want to learn about using REST to communicate with Informix? Head on over to Martin Fuerderer's article First Steps with the REST API and then check out the REST API syntax page in the Informix Knowledge Center to get more details and examples of the REST syntax for accessing and querying data.
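To make that concrete, here is a minimal, hypothetical sketch of calling the REST listener from plain Java; any HTTP client in any language would work just as well. The host, port, database/collection path, and the query parameter are assumptions based on the general URL layout described in the pages linked above, so check the REST API syntax page for the exact details.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RestListenerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical listener host/port and database/collection names.
        String filter = URLEncoder.encode("{\"city\":\"Lenexa\"}", "UTF-8");
        URL url = new URL("http://localhost:8080/stores_demo/customer?query=" + filter);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // Read the JSON documents returned by the wire listener.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}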

  • The MQTT listener

New to 12.10.xC7, the MQTT listener type enables you to use the MQTT publish model to insert data into the Informix database server. This is perfect for Internet of Things scenarios where data collected on the "edge" needs to be published and stored in a central data store in a lightweight and asynchronous way.

Want to learn more about using MQTT to publish data to Informix? Check out  Working with the new 12.10.xC7 MQTT Listener over on Shawn Moe's The Lenexa View blog for an in-depth discussion of the MQTT listener, including some sample Java code.
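For a flavor of what publishing to the MQTT listener can look like, here is a short sketch using the Eclipse Paho Java client. The broker URL and port, the topic naming (database/collection), and the JSON payload shape are assumptions for illustration; Shawn Moe's post above covers the actual syntax and configuration.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MqttListenerSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port where the Informix MQTT listener is running.
        MqttClient client = new MqttClient("tcp://localhost:1883",
                MqttClient.generateClientId(), new MemoryPersistence());
        client.connect();

        // Publish a JSON document; the topic is assumed to name the target
        // database and collection, e.g. "iot_db/sensor_readings".
        MqttMessage msg = new MqttMessage(
                "{\"sensor_id\": 42, \"temperature\": 21.5}".getBytes("UTF-8"));
        msg.setQos(0);
        client.publish("iot_db/sensor_readings", msg);

        client.disconnect();
        client.close();
    }
}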

 

And last but not least, it is worth reiterating that all three listener types work seamlessly on relational, JSON, timeseries data, or even a combination. No matter the underlying way in which the data is stored, the Informix wire listener will manage those details automatically for you, allowing you as the application developer to focus on your application instead of the nuances of data access.

Want to get started, but need more info on how to start different or multiple listener types? You can find that here in the IBM Informix Knowledge Center.

Modified on by Erika Von Bargen

Primary cannot send the log requested by RSS

cheetahblog 5,089 Views

The other day I was setting up a high-availability cluster environment and ran into an interesting problem. I followed all the necessary instructions for setting up an RSS server. However, the RSS server got stuck in the recovery process, and the message log on the primary server reported an error that it could not send a log. For example, I executed the following command on the RSS server to set the data replication type:

      onmode -d RSS <primary server name>

 

 The RSS server was stuck in recovery mode and the message log on the primary server showed the following messages:

        RSS Server <RSS server name> - state is now connected             
        Can not send log <log number>     

 

The log number mentioned in the error message was not close to the current log on the primary or RSS server. For example, the current log on the primary was 7438 and on the RSS server 7436, but the message log stated ‘Can not send log 825241904’. So where was the server getting such an out-of-sequence log number?

 

Initially I thought it was some kind of corruption. However, after some investigation I figured out that I was using delayed application (DELAY_APPLY) on the RSS server, and the directory specified with the LOG_STAGING_DIR configuration parameter was holding some unwanted file(s), for example the file 'ifmxUniqueLog_825241904'. So, during recovery the RSS server requested log number 825241904 from the primary, but that log did not exist on the primary server.

 

Once I removed all files from the LOG_STAGING_DIR directory on the RSS server, I was able to successfully set up the high-availability cluster environment. Conclusion: the next time you try to set up an RSS server with DELAY_APPLY, make sure nothing is left in the LOG_STAGING_DIR on the RSS server.

 

-Sanjit Chakraborty

HTTPS on OAT

cheetahblog 5,064 Views

Introduction



HTTPS protects the OAT webserver from eavesdropping, tampering, and message forgery. It protects the OAT webserver from hackers who are trying to secretly listen in on or interfere with the OAT network. Enabling HTTPS will encrypt OAT clients’ messages before they are sent to the OAT webserver. This prevents hackers from listening over the line and stealing sensitive information. Sometimes hackers will set up a fake OAT webserver and deceive OAT clients as to who the real OAT host is. Enabling HTTPS also allows OAT clients to authenticate the OAT host, so that hackers cannot deceive OAT clients with a fake OAT webserver.



Please note that HTTPS only encrypts communication between OAT webserver and OAT client. It does not encrypt communication between IDS server and the OAT webserver.



IBM® Informix® Dynamic Server, Version 9.4 and later, enables encryption of data between IDS server and OAT webserver using an encryption communication support module. You can find more information in the following link:

http://www.ibm.com/developerworks/db2/library/techarticle/dm-0401dandekar/index.html

Enabling HTTPS in OAT will involve the following steps:

  • 1. Replacing OAT’s Apache webserver with another mod_ssl enabled Apache webserver.
  • 2. Creating an encryption key and a certificate for your new OAT webserver, so that OAT clients can authenticate your webserver based on your certificate.
  • 3. Configure httpd.conf (the Apache configuration file) to enable HTTPS



Replacing OAT’s Apache Webserver with another mod_ssl enabled Apache Webserver



In order to use OAT with HTTPS, you will need an Apache webserver, PHP with some extensions, and the Apache mod_ssl module. HTTPS functionality is provided by the Apache mod_ssl module. Using the OAT automated installer will install an Apache webserver and PHP with all the required PHP extensions; however, the Apache webserver bundled with the installer is NOT shipped with the mod_ssl module.



Therefore you have to install another apache webserver compiled with the mod_ssl module to replace OAT’s apache webserver. Then dynamically load OAT’s PHP apache handler (a file named libphp5.so, or php5apache2_2.dll on windows, which is the apache and php ‘glue’) to your new apache webserver.



Dynamically loading the mod_ssl module into OAT’s original Apache webserver is not possible because the version of OAT’s original Apache webserver is 2.2.4. The mod_ssl dynamic module only supports the older Apache 1.3.x; it does not support OAT’s Apache webserver.



On Linux and Mac OS X,



In order to compile Apache with mod_ssl support, you need to have OpenSSL installed. OpenSSL might have been installed on your Linux distribution already and you only need to find out the installation directory. Note that the OpenSSL binaries have to be 32-bit, because the binaries from the OAT automated installer are 32-bit. We suggest doing this only on a 32-bit machine so that there will be no architectural mismatch among the binaries.



If OpenSSL has not been installed already, you can download the latest source code release from here:

http://www.openssl.org/source/

Then you have to compile the OpenSSL source code. Note that you have to compile 32-bit OpenSSL binaries because the binaries from the OAT automated installer are 32-bit. This can be done by setting CFLAGS to be –m32. Use the following commands:



cd /path/to/openssl/source/code
export CFLAGS=-m32
./config --prefix=/openssl/installation/directory/ --openssldir=/openssl/installation/directory/
make
make install



Stop OAT’s Apache webserver by running the /oat/installation/directory/StopApache script. Rename OAT’s Apache_2.2.4 directory to Apache_2.2.4_noSSL. This will serve as a backup copy of the Apache binaries. You will also need to use some configuration files from this backup Apache directory in later steps. Do make a copy.



Then you have to compile the latest Apache with mod_ssl support. Download the latest Apache source code and compile with the following commands. Note that you have to compile 32-bit Apache binaries because the binaries from the OAT automated installer are 32-bit. This can be done by setting CFLAGS to be –m32. The compilation prefix should match the old OAT apache’s directory (/oat/installation/directory/Apache_2.2.4/)



cd /path/to/apache/source/code
export CFLAGS=-m32
./configure --prefix=/oat/installation/directory/Apache_2.2.4/ --enable-so --enable-ssl --with-ssl=/openssl/installation/directory/
make
make install



Right now Apache should be installed with mod_ssl support. Use the following commands to check if mod_ssl is enabled in Apache.



cd /oat/installation/directory/Apache_2.2.4/bin/
./httpd -M
(Here you will see a list of Apache modules. See if the SSL module is on the list)



Then you have to change the Apache configuration file to load OAT’s PHP Apache handler (the PHP and Apache ‘glue’). Edit the Apache configuration file (/oat/installation/directory/Apache_2.2.4/conf/httpd.conf) and add the following lines. Uncomment them if these lines exist but are commented out.



LoadModule php5_module /oat/installation/directory/PHP_5.2.4/libphp5.so
AddType application/x-httpd-php .php
PhpIniDir '/oat/installation/directory/PHP_5.2.4/lib'
Setenv INFORMIXDIR '/oat/installation/directory/Connect_3.50/'



Search for the line “Listen 80” in the httpd.conf file. This indicates the port number that the OAT webserver should run on. You should use the same port number as OAT’s original Apache webserver that comes with OAT installer.



Search for the line “ServerName www.example.com:80” in the httpd.conf file. This indicates the name and the port that the server uses to identify itself. You should use the same ServerName as the original non-SSL Apache webserver that comes with OAT installer.



Search for the line “DirectoryIndex index.html” in the httpd.conf file. This sets the files that Apache will serve if a directory is requested. Change this line to “DirectoryIndex index.html index.php”



Copy the file /oat/installation/directory/Apache_2.2.4_noSSL/bin/envvars to /oat/installation/directory/Apache_2.2.4/bin/envvars. Apache will read this file to setup the Apache environment variables for OAT to run.



Copy the entire directory /oat/installation/directory/Apache_2.2.4_noSSL/htdocs/openadmin/ to /oat/installation/directory/Apache_2.2.4/htdocs/openadmin/. All the OAT source code resides in this directory.



Run the following commands to make sure that the PHP Apache handler is properly loaded.

cd /oat/installation/directory/Apache_2.2.4/bin/
./httpd -M
(Here you will see a list of Apache modules. See if the php5 module is on the list)



Right now your new webserver should be properly set up for OAT. You may start your webserver by running /oat/installation/directory/StartApache and visit OAT using your web browser.



Note that this server has mod_ssl enabled but HTTPS is not switched on yet. A few more steps are needed to enable HTTPS in the following sections.



On Windows,



First we have to download and install Openssl from the following website:

http://www.openssl.org/related/binaries.html

Then we need to setup Apache with mod_ssl support. Download the latest Win32 Binary including OpenSSL 0.9.8g (MSI Installer). This should be available here:

http://httpd.apache.org/download.cgi

Stop the OAT Apache webserver. There should be an OpenAdmin shortcut in the Start menu. You should be able to stop the OAT Apache webserver from there. Make sure that the Apache Monitor is not running in your system tray.



Rename OAT’s Apache_2.2.4 directory to Apache_2.2.4_noSSL. This will serve as a backup copy of the Apache binaries. You will also need to use some configuration files from this Apache directory in later steps. Do make a copy.



Run the Apache MSI installer. Do a typical install and choose the installation directory to be /oat/installation/directory/Apache_2.2.4



Edit the Apache configuration file (/oat/installation/directory/Apache_2.2.4/conf/httpd.conf). Add or uncomment the following lines in the httpd.conf file:



LoadModule php5_module "c:\oat\installation\dir\PHP_5.2.4\php5apache2_2.dll"
LoadModule ssl_module modules/mod_ssl.so
AddType application/x-httpd-php .php
PhpIniDir 'c:\oat\installation\dir\PHP_5.2.4'



Search for the line “Listen 80” (or “Listen 8080”) in the httpd.conf file. This indicates the port number that the OAT webserver should run on. You should use the same port number as OAT’s original Apache webserver that comes with OAT installer.



Search for the line “ServerName www.example.com:80” in the httpd.conf file. This indicates the name and the port that the server uses to identify itself. You should use the same ServerName as OAT’s original Apache webserver that comes with OAT installer.



Search for the line “DirectoryIndex index.html” in the httpd.conf file. This sets the files that Apache will serve if a directory is requested. Change this line to “DirectoryIndex index.html index.php”



Search for the line “setenv INFORMIXDIR” in the OAT’s original Apache’s configuration file (c:\oat\installation\dir\Apache_2.2.4_noSSL\conf\httpd.conf). This line sets the INFORMIXDIR variable in the Apache environment for OAT. This has NOT been set in your new Apache webserver yet. Copy this line to your new Apache webserver’s httpd.conf file. You can put this at the end of the httpd.conf file.



Copy the entire directory c:\oat\installation\dir\Apache_2.2.4_noSSL\htdocs\openadmin\ to c:\oat\installation\dir\Apache_2.2.4\htdocs\openadmin\. All the OAT source code resides in this directory.



Run the following commands in a command prompt to make sure that the PHP Apache handler and the mod_ssl modules are properly loaded.

cd c:\oat\installation\dir\Apache_2.2.4\bin\
httpd.exe -M
(Here you will see a list of Apache modules. See if the php5 module and the ssl_module are on the list)



Right now your new webserver should be properly set up for OAT. You may now click on ‘Start Apache Service for OpenAdmin’ in the OpenAdmin Menu under the Start Menu. You should be able to access OAT using your web browser.



Note that this server has mod_ssl enabled but HTTPS is not switched on yet. A few more steps are needed to enable HTTPS in the following sections.



Creating an Encryption Key and a Certificate for your OAT Webserver



Keys are used in encryption and decryption. They usually come in pairs, the public key and private key. Public keys are used to encrypt messages and private keys are used to decrypt messages. The message encrypted by a public key can only be decrypted by its associated private key.



A HTTPS enabled webserver will have its own pair of public and private key. The webserver will make its public key available to all clients. But nobody else except the webserver will know its private key. Thus all clients will be able to encrypt messages, but only the webserver can decrypt the message. If a client wants to send encrypted messages to the webserver, he will encrypt the message with the public key provided by the webserver and then send out the message. The webserver will use its secret private key to decrypt the client’s message. Hackers who are trying to listen over the network and steal the client’s message will not be able to decrypt the client’s message, because the hacker does not have the private key.



A certificate is a document authenticating a person to be who he claims to be. An HTTPS-enabled webserver will have its certificate, signed by a trusted certificate authority, to authenticate itself as the real webserver that the clients are talking to. Before a client talks to the webserver, he will be prompted to view and accept the webserver’s certificate, so the client can make sure that the webserver’s certificate is signed by a trusted certificate authority before proceeding with the communication. This prevents hackers from setting up fake webservers to deceive clients.



Once a webserver is HTTPS enabled, clients get to choose whether they want to do a normal connection to the webserver or a secure connection to the webserver. If clients want to do a normal connection, they will type “http://webserver_url” in their web browser. If clients want to do a secure connection, they will type “https://webserver_url”. In a secure connection, the webserver will send the client its certificate and its public key. The client will be prompted to view and accept the webserver’s certificate before the webpage is loaded, so that the client can be sure that the webserver is not a fake webserver created by hackers to deceive you. Once the client accepts the certificate, the client’s web browser will use the public key that it received from the webserver to encrypt communication. Only the webserver has the associated private key, thus only the webserver can decrypt the client’s encrypted communication. Hackers cannot secretly listen and decrypt the communication.



To generate private/public key pairs and the certificate, we use the openssl executable. You should be able to find it under the bin directory of your openssl installation.



To generate a private key, use the following command:

openssl genrsa -des3 -out privkey.pem 2048

The private key will be stored in the privkey.pem file. Store this file in a secure location because this is the webserver’s secret decryption key. This file will be used to generate the associated public key. When we generate the certificate, it will look for this file, generate the associated public key, and include the public key in the certificate.



To generate a certificate, we need to create a certificate signing request and send it to a trusted certificate authority (such as VeriSign). The authority will then issue you a certificate. Use the following command to generate a certificate request. For more information about the process of signing certificate requests, contact your certificate authority.

openssl req -new -key privkey.pem -out cert.csr

If you don’t want to deal with another certificate authority and you just want to create a certificate for yourself, you can create a self-signed certificate. Note that this is not the recommended way of creating a certificate.

openssl req -new -x509 -key privkey.pem -out cacert.pem -days 1095

Openssl will prompt you to enter your personal information. After that the certificate will be stored in the cacert.pem file. This file will be displayed to web clients to verify your identity. It will also include the public key for web clients, thus they can start encrypting communication.



Openssl will also prompt you to enter a pass phrase. This is an extra layer of security. You will need to enter your pass phrase when you start your webserver.



For more information about encryption keys, please consult the OpenSSL documentation:

http://www.openssl.org/docs/HOWTO/keys.txt

For more information about certificates, please consult the OpenSSL documentation:

http://www.openssl.org/docs/HOWTO/certificates.txt

Configure httpd.conf (the Apache configuration file) to enable HTTPS



Edit the Apache configuration file. The file should be /oat/installation/directory/Apache_2.2.4/conf/httpd.conf



Search for the following line:

#Include conf/extra/httpd-ssl.conf

This line is commented out by default. Uncomment it so that httpd.conf will include the apache ssl configuration file.



Then edit the Apache SSL configuration file. The file should be /oat/installation/directory/Apache_2.2.4/conf/extra/httpd-ssl.conf



HTTPS will require one more ssl port. By default, the ssl port number is set to be 443. Make sure this port is available. If you wish to change this port, edit the Listen directive and the Virtual Host section of the httpd-ssl.conf file.


Search for the ‘SSLCertificateKeyFile’ directive and the ‘SSLCertificateFile’ directive in the httpd-ssl.conf file. These two directives indicate the location of your private key file and the certificate file. Make sure that they point to the privkey.pem and the cacert.pem created in the previous section.


Search for the ‘SSLCipherSuite’ directive. This directive indicates the ciphers for your HTTPS webserver. By default the HTTPS webserver will accept all encryption ciphers. If you wish to accept only the seven strongest ciphers, edit the directive as follows, or else you can keep the default configuration:

SSLProtocol all
SSLCipherSuite HIGH:MEDIUM

For more information about HTTPS configurations, please refer to the following website:
http://httpd.apache.org/docs/2.0/ssl/ssl_howto.html

Final Testing



Your OAT webserver should be secured with HTTPS right now. Restart your webserver and launch OAT with your web browser. Instead of using http://hostname:portnumber/openadmin, try using https://hostname:ssl_portnumber/openadmin. You will be prompted to view and accept OAT webserver’s certificate before the OAT login page is launched.


Note:
On Linux, you can restart your webserver by running the StopApache script and then the StartApache script in the OAT installation directory. You will be prompted to enter the pass phrase before you can start your webserver.


On Windows, you have to restart your webserver with the command prompt. Starting the webserver with Apachemonitor.exe or with the start menu shortcuts doesn’t work because they do not support pass phrases. Use the following command to restart your webserver.

httpd -k restart



If you wish to use Apachemonitor.exe or the start menu shortcuts to control your webserver, you have to disable pass phrases. Note that this reduces the level of security.

Run the following command:

cd c:/openssl/installation/directory/bin/
openssl rsa -in privkey.pem -out privkey_nopassphrase.pem



Now an unencrypted copy of your private key will be stored in the privkey_nopassphrase.pem file. Edit your httpd-ssl.conf file SSLCertificateKeyFile directive to use the privkey_nopassphrase.pem file instead of the privkey.pem file. Then you will not be prompted to enter a pass phrase when starting your webserver. Apachemonitor.exe and the start menu shortcuts should work now.



Leo Chan

[Read More]

New onstat option for ER

cheetahblog Tags:  sbspace rqm onstat er 5,003 Views
If you use Enterprise Replication (ER), you can use onstat -g rqm to print the statistics and contents of the low-level queues managed by the Reliable Queue Manager (RQM). From IDS 10.00.xC7 onwards, a new option to print information about the sbspaces configured for ER is available.

onstat -g rqm SBSPACES prints detailed statistical information about the sbspaces configured for CDR_QDATA_SBSPACE. A sample output is shown below:
$ onstat -g rqm SBSPACES

IBM Informix Dynamic Server Version 11.10.FC1 -- On-Line -- Up 00:18:03 -- 67584 Kbytes

RQM Space Statistics for CDR_QDATA_SBSPACE:
-------------------------------------------
name/addr   number  used  free   total  %full  pathname
0xda3d088   2       1     4586   4587   0      ./replsbs
replsbs     2       1     4586   4587   0

0xda539c0   3       1     4586   4587   0      ./sbsp1
sbsp1       3       2     9172   9174   0


As you can see, it prints the address of the chunk or the name of the sbspace, the chunk/space number, total pages, free pages, percentage used, and the path name for the chunks. The information printed is for each chunk in an sbspace followed by the sbspace itself. In this case replsbs and sbsp1 are the sbspaces configured for CDR_QDATA_SBSPACE.

Suma Vinod [Read More]

ON-Bar whole system parallelize backup/restore

cheetahblog 1 Comment 4,913 Views

Prior to Cheetah, a whole-system backup (onbar -b -w) was a serial backup of all storage spaces and logical logs based on a single checkpoint. That time is stored with the backup information. The advantage of using a whole-system backup was that you can restore the storage spaces with or without the logical logs. Because the data in all storage spaces is consistent in a whole-system backup, you do not need to restore the logical logs to make the data consistent. Level 0, 1, or 2 backups are supported.

One of the new features of Cheetah is backup/restore of dbspaces in parallel with whole-system functionality. With parallel backup, multiple processes run simultaneously, each process backing up a different dbspace. In most cases, parallel backups complete faster than serial backups, which use only one process.

The ONCONFIG parameter "BAR_MAX_BACKUP" is used for whole-system backups/restores as well. The BAR_MAX_BACKUP parameter specifies the maximum number of parallel processes that are allowed for each onbar command. Both UNIX and Windows support parallel backups. Although the database server default value for BAR_MAX_BACKUP is 4, the onconfig.std value is 0.

To specify parallel backups and restores, including parallel whole-system backups and restores, set BAR_MAX_BACKUP to a value higher than 1. For example, if you set BAR_MAX_BACKUP to 3 and execute an ON-Bar command, ON-Bar will spawn a maximum of 3 processes concurrently.

If BAR_MAX_BACKUP is set to 0, the system creates as many ON-Bar processes as needed. The number of ON-Bar processes is limited only by the number of storage spaces or the amount of memory available to the database server, whichever is less.

To perform a serial backup or restore, including a serial whole-system backup or restore, set BAR_MAX_BACKUP to 1.

Also in Cheetah, storage manager efficiency is improved by improving dbspace ordering. The "biggest" dbspaces are backed up first and the "smallest" last. The dbspace with the most pages used is the "biggest" and will be backed up first. This ensures better parallelism, no matter how BAR_MAX_BACKUP is set or how many pages are to be backed up in different dbspaces. The new order of backup/restore is also effective for non-whole-system backup/restore. Restore is done in the same order as backup to ease the storage manager's access to stored objects.

To implement the new ordering of dbspaces for backup and restore, changes are made to the bar_instance table in the sysutils database.
A new column, "ins_backup_order" (integer, default 0, not null), is added to the bar_instance table.
 
Changes are also made to the ixbar file:
A new field containing a backup sequence integer (the last field in the line) is added to the ixbar file.
The value of the new field is always "0" (zero) for log backup objects.

Even if BAR_MAX_BACKUP is set to 1 for serial backup, the dbspaces will be backed up using the new dbspace order.

Backup Operation
=================
Only one checkpoint for all dbspaces - just before the backup of the rootdbs.
Rootdbs is backed up first, without parallelism (no change here).
Before-image processor threads ("arcbackup2") are started at this time, one for each dbspace (more threads running in parallel).
As each dbspace backup completes, the respective "arcbackup2" thread exits (fewer "arcbackup2" threads as the backup progresses).
New order of backup: the order is based on the used-pages count at the start of the backup. The dbspace with the most pages used is the "biggest" and will be backed up first.

Restore Operation
================
Not much changed, only it is now parallel.
Rootdbs is restored first, without parallelism (no change here).
Restore dbspaces in same order as they were backed up. Use the value of "ins_backup_order" to determine correct order. If BAR_MAX_BACKUP is changed between backup and restore, the objects will still be restored in the same order, however the timing relative to each other may differ significantly. This could have negative effects on SM performance .

Rashmi Chawak [Read More]

Tracking object state changes within Enterprise Replication (ER)

cheetahblog Tags:  cdr_features track logcommands 4,898 Views

Have you ever run into a situation where certain ER administrative commands were executed and caused ER to fail? Without knowing the exact sequence of activities, it would be very difficult to identify the actual commands that caused the problem. Unfortunately, not all ER commands generate alerts or update the message log. If you did not use a script to run ER administrative commands, there would be no easy way to track the sequence of activity that caused the failure.

 

An undocumented mechanism can be used to track ER object state changes. Once you enable this tracking mechanism, the Informix server records information about ER administrative operations in the server message log file with their full command-line arguments, which can help when analyzing ER issues. By default the tracking is disabled. To enable it, use either:


  • Add the following entry to the ONCONFIG configuration file and restart the Informix server.

                 CDR_FEATURES   LOGCOMMANDS

                 OR

  • Dynamically change the configuration settings with following command:

            cdr change onconfig "CDR_FEATURES LOGCOMMANDS"

For example, if you run 'cdr define server --ats=/test/data2 --ris=/test/data2 --connect cdr2 --init g_cdr2 --sync g_cdr1' to define a replication server, it would generate the following information in the message log:

10:23:12 CDR remote administrative operation 'define serv -A /test/data2 -R /test/data2 -S g_cdr1 -I g_cdr2' execution succeeded.

 

-Sanjit Chakraborty

How to monitor your SQL Shared Memory Caches

cheetahblog Tags:  performance cache 4,853 Views


I don’t know about the rest of you, but I have had a lot of problems truly determining if my SQL cache settings are efficient. If my User Defined Routine (UDR) cache, for example, always seems to be 50% full, does that mean the cache only uses 50% and so the settings are too high, or is it at 50% because I’m constantly cleaning the pools, and thus I’m configured too low? Beginning in Cheetah you now have a means to diagnose that type of information, and it’s located in the sysmaster database. The table name is syssqlcacheprof, and it contains profile information for each of the caches.



 



Below is a sample output from this table:

[Sample output from the syssqlcacheprof table]
As you can see in the above, you can now quickly identify how often the cache is removing entries (or cleaning), and what the efficiency of your cache is, in terms of a hit rate. In fact a very simple query that could be used for analyzing the hit ratio for the caches would be:



 



select *, (hits/(hits+misses)) hit_ratio from syssqlcacheprof



 



This table also provides you the benefit of quickly seeing how much memory each cache is actually using, something that before Cheetah was a bit awkward to calculate. While not a major feature of Cheetah, this new sysmaster table is a valuable new asset in performance tuning.




Mark Jamison [Read More]

Simply Powerful and Flexible – IBM Informix on Cloud

Pradeep Natarajan 4,788 Views

In this latest article published today in the IBM Data Management blog, Pradeep Muthalpuredathe, Senior Manager for Informix Engineering & Support shares his thoughts about the Informix on Cloud offering. Get started with this powerful offering today.

Also check out the newly redesigned landing page for IBM Informix! All the IBM Informix resources you need are in one place - product offerings, select customer references, videos, demos, white papers, code samples, and downloads.

On-Disk Encryption available in 12.10.xC8

Pradeep Natarajan 4,775 Views

Hear Ye! Hear Ye! Have you heard? IBM Informix 12.10.xC8 is publicly available for download! The most significant update to the product in this release includes on-disk encryption, also known as Encryption At Rest (EAR). The EAR feature allows you to encrypt the data at the lowest level, on the disk. Now you can rest easy, knowing that your data is encrypted and secure in the event of a physical disk loss or theft. In a world where low-cost computers are physically accessible, increasingly used for temporary storage, aggregation, and analytics of data, it is vital to keep the data secure. You can start encrypting your storage spaces by upgrading to 12.10.xC8.

Here is a nice blog written by our architect @Shawn Moe (smoe) to help you get started. For complete information on this feature, please refer to our documentation page.

New Beta Drop available

cheetahblog Tags:  cheetah 4,769 Views
The latest Beta drop of IDS Cheetah is now available, with some important new features integrated into the code line.

New features available in Beta drop 5 include:

  • AIX 64-bit platform build available for download
  • Web Feature Service for Spatial Data - Delivers spatial information as a web service for your Service Oriented Architecture
  • Label Based Access Control (LBAC) - Control data access at a row and column level using security labels
  • Continuous Availability with multiple HDR remote secondary servers

The following Infocenter page has a complete list of new features for each Beta drop: http://publib.boulder.ibm.com/infocenter/idshelp/v111/index.jsp?topic=/com.ibm.po.doc/newfeatures_11.10_beta.html.

Guy Bowerman [Read More]

Improved Statistics Maintenance

cheetahblog Tags:  explain sampling statistics 4,727 Views
  • Automatic statistics collection.

IDS now automatically collects index statistics, equivalent to the statistics gathered by UPDATE STATISTICS in LOW mode, when you create a B-tree index on a UDT column of an existing table or if you create a functional or VII index on a column of an existing table. Statistics that are collected automatically by this feature are available to the query optimizer, removing the necessity to manually run UPDATE STATISTICS. When B-tree indexes are created, column statistics are collected on the first index column, equivalent to what UPDATE STATISTICS generates in MEDIUM mode, with a resolution of 1% for tables of fewer than a million rows, and 0.5% for larger tables.

  • Query statistics in EXPLAIN output.

You can now view statistics about completed queries in the new Query Statistics section in SET EXPLAIN output. The Query Statistics section of the output gives detailed information on each scan, join, and sort, such as estimated cost, number of rows scanned, estimated number of rows, and number of rows produced at each level. The EXPLAIN_STAT configuration parameter enables or disables the inclusion of a Query Statistics section in the explain output file. You can generate the output file by using either the SET EXPLAIN statement of SQL or the onmode -Y sessionid command. Following is an example.




  • New SAMPLING SIZE option in UPDATE STATISTICS MEDIUM mode.

In explicit UPDATE STATISTICS operations in MEDIUM mode, a new SAMPLING SIZE option in the Resolution clause can specify the minimum number of rows to sample for column distributions.

If the Resolution clause omits the RESOLUTION keyword and specifies no confidence level and no percent value, then the number of rows that Dynamic Server samples will be the larger of the following two values:

    * The min value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the default percent of rows in each bin (2.5%) and for the minimum confidence level (0.80).

If a sampling size is specified in a Resolution clause that includes explicit values for both the average percent of sampled rows per bin and for the confidence level, then the number of sampled rows will be the larger of these two values:

    * The min value that you specify immediately after the SAMPLING SIZE keywords
    * The sampling size that is required for the specified percent of rows and for the specified confidence level.

If a sampling size is specified in a Resolution clause that includes an average percentage value but sets no confidence level, then the minimum confidence value of 0.80 is used to calculate the actual sampling size for Dynamic Server to use if the specified size is smaller.

For example, the following statement calculates statistics for two columns of the customer table, without updating index information. At least 200 rows will be sampled, but the actual size of the sample might be larger than 200 if more rows are required to provide the default 0.80 confidence level for a sample distribution that uses 50 equivalence categories, with an average percentage of 2% of the sampled values in each bin.

UPDATE STATISTICS MEDIUM FOR TABLE customer (city, state)
  SAMPLING SIZE 200 RESOLUTION 2 DISTRIBUTIONS ONLY;

Whether or not you include an explicit SAMPLING SIZE specification in the Resolution clause, Dynamic Server records in the system catalog the actual sampling size (as a percentage of the total number of rows in the table) at the time of MEDIUM mode UPDATE STATISTICS creation.

  • New FILE TO option to redirect SET EXPLAIN output to a file that you desire.

When you execute a SET EXPLAIN FILE TO statement, explain output is implicitly turned on. The default filename for the output is sqexplain.out until changed by a SET EXPLAIN FILE TO statement. Once changed, the filename remains set until the end of the session or until it is changed by another SET EXPLAIN FILE TO statement.

The filename can be any valid combination of optional path and filename. If no path component is specified, the file is placed in your current directory. The permissions for the file are owned by the current user. The output file that you specify in the SET EXPLAIN statement can be a new file or an existing file. In Dynamic Server, if the FILE TO clause specifies an existing file, the new output is appended to that file. Following is an example where the explain output is sent to the file "explain.vijay" under the "/tmp" directory.

SET EXPLAIN FILE TO '/tmp/explain.vijay'

These features reduce the need for DBAs to perform UPDATE STATISTICS on indexes, enabling better query plans and ease of use.


Vijay Lolabattu
[Read More]

Dormant, no more!

Pradeep Natarajan 4,720 Views


[Image source: livescience.com]

First, apologies to those who have been following this blog regularly in the past! Unfortunately, this blog had remained dormant for far too long. Make no mistake, it was just the blog that had been dormant, and neither the product nor the experts behind it. Our team has been busy enhancing our beloved IBM Informix product as well as producing content for the user community. As evidence, you will see a series of posts erupting here that point to various blogs, presentations, sample applications, and tutorials available in the public domain.

It is time to revive this blog and provide an avenue for our experts to blanket the user community with Informix wisdom!

Modified on by Pradeep Natarajan

IBM OpenAdmin Tool for IDS -- Version 2.20 – Available Now!!

cheetahblog Tags:  idsadmin openadmin oat 4,616 Views

OpenAdmin Tool for IDS has been greatly enhanced in version 2.20 with a completely redesigned user interface, a new automated installer, and lots of new IDS admin functionality. OpenAdmin Tool for IDS (OAT) is a PHP-based Web browser administration tool for IDS 11 and IDS 11.5 that provides the ability to administer multiple database server instances from a single location. OAT makes administration easy by allowing you to drill down on resource usage and events, view query performance statistics, and much more. And since the tool is written in PHP and available as open source, you can customize it with your own business logic and installation requirements.



New feature highlights of OpenAdmin Tool for IDS version 2.20 include:



  • Automated Installer: For users who want an easy, hassle-free install, there is a new automated GUI installer that will install and configure Apache, PHP, I-Connect, and OAT. For users who want to use or install their own Apache/PHP configurations, OAT is also released as a stand-alone package.


  • All-New Look: OAT version 2.20 has a completely new and improved UI


  • Automated Update Statistics: Configure rules and define a schedule to have IDS automatically maintain your database server's statistics


  • Onconfig Recommendations: Get recommendations for your onconfig settings that are tuned to your specific database server instance; modify dynamic onconfig parameters directly through OAT


  • Enhanced Mach11 Support: Including a Connection Manager interface, an SDS Setup Wizard, and the ability to start and stop Mach11 servers remotely


  • Historical Performance Graphs: View and monitor historical trends of key performance measurements


  • Task Scheduler Wizard: Use OAT’s new wizard to define new tasks in the scheduler


  • Read-Only Access: Define groups in OAT that can monitor IDS database servers but not perform administrative tasks


  • And the list goes on…. Virtual Processor Administration, System Integrity Checks, a Privileges Manager, environment variable support on OAT connections, the ability to kill database sessions remotely through OAT, and more!






Download OAT version 2.20 now at https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-informixfpd, or at www.iiug.org in the Members Area.



Erika Von Bargen [Read More]

RENAME support in Enterprise Replication

cheetahblog Tags:  er rename replication 4,525 Views
From IDS 10.0x onwards, Enterprise Replication (ER) supports alter operations on a replicated table while replication is active; however, RENAME support was not one of them, but that changes in Cheetah. IDS 11.x will start supporting RENAME on ER columns, tables, and databases. This feature simplifies DBA tasks and increases data availability. Without this feature, the DBA would have to plan for a time in which ER could be removed from the object so that the rename could be performed. This would require advance planning and scheduling of the rename during non-peak hours.

When a RENAME operation is performed, affected replicate definitions will be updated to reflect the impact of the RENAME, and a control message will be sent to the other servers informing them of the rename. The RENAME operation is allowed only on a mastered replicate. It does not propagate the RENAME command itself; there are plans to implement that too in the future. The user simply issues the rename DDL statement on each of the servers that are affected by the RENAME. If the rename is of a column or a table, then the replicate will be cycled (cdr stop/start replicate) in much the same way that occurs with a table ALTER. If a database rename occurs, however, ER will be cycled (cdr stop/start). This cycling will only occur if the rename affects something on the local node. In all cases, the rename will cause a control message to be sent to the other servers within the replication domain so that the syscdr database is correctly updated.

Vijay Lolabattu [Read More]