This post helps you configure the JSON wire listener in an IBM Informix database in three simple steps.
NoSQL applications connect to IBM Informix instances through the JSON wire listener, a Java program whose jar file is located in $INFORMIXDIR/bin. The listener serves as the communication channel between an Informix instance and NoSQL applications.
Prepare the environment:
- A DBSERVERALIAS should be added to the $ONCONFIG and $INFORMIXSQLHOSTS files, if it is not already there.
- A corresponding port for this DBSERVERALIAS should be reserved in the /etc/services file, if it is not already there.
For example, this is what can be seen on my system:
From the $ONCONFIG file:
DBSERVERALIASES dr_informix1210_1, lo_informix1210_1
From the $INFORMIXSQLHOSTS file:
ol_informix1210_1 onsoctcp idcxs3 ol_informix1210_1
dr_informix1210_1 drsoctcp idcxs3 dr_informix1210_1
lo_informix1210_1 onsoctcp 127.0.0.1 lo_informix1210_1_1
From the /etc/services file:
ol_informix1210_json 27017/tcp #JSON listener for ol_informix1210
Create a *.properties file in the $INFORMIXDIR/etc folder.
Since a JSON wire listener is a Java application, it requires a connection properties file to connect to the instance.
- The default name for the file is jsonListener.properties.
- It can be renamed if needed, though the .properties suffix must remain unchanged.
- It must be located in $INFORMIXDIR/etc.
The *.properties file can contain several parameters that control the behavior of the wire listener. To limit the scope of this post, only a handful of essential parameters are covered here; in this case, the file is named mongo.properties.
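As an illustration, a minimal mongo.properties might look like the following. The server name, host, ports, user, and password are placeholders, and the exact parameter set supported may vary by Informix version, so treat this as a sketch:

```properties
# Illustrative values only -- substitute your own host, ports, server name
# and credentials.
listener.type=mongo
listener.port=27017
url=jdbc:informix-sqli://127.0.0.1:9088/sysmaster:INFORMIXSERVER=lo_informix1210_1;USER=ifxjson;PASSWORD=mypassword
authentication.enable=false
security.sql.passthrough=true
```

Note that the JDBC port inside url (9088 here) is deliberately different from listener.port (27017).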
It is highly recommended that the port number used in the URL and the listener port for the NoSQL applications be different. The wire listener uses the listener port to accept incoming NoSQL connections, and uses the port in the URL to make a JDBC connection to the Informix instance.
Start the Wire Listener
There are two ways to start the JSON wire listener:
- command-line execution using Java
- the SQL admin API from within Informix
Command line execution using java:
java -cp $INFORMIXDIR/bin/jsonListener.jar com.ibm.nosql.server.ListenerCLI -config mongo.properties -start
SQL admin API:
execute function [task | admin] ("start json listener" [, "property file"] [, listener arguments])
For example, in this case,
informix@node1:/> dbaccess - -
> database sysadmin;
> execute function task("start json listener","mongo.properties");
(expression) JSON listener launched. For status check /opt/IBM/informix/12.10_
1 row(s) retrieved.
The following video showcases the steps described above and demonstrates how easily the JSON listener can be configured in IBM Informix.
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions
Modified on by Prasanna A M
- Does your solution require Node.js functionality, and are you searching for an IBM offering in this space?
- Are you looking for a compatible solution for IBM Power, Intel, and z Systems products that require Node.js functionality and package management?
Then IBM SDK for Node.js is the answer to your queries. The IBM offering is based on the Node.js open source project and has a matching release for each release of Node.js from the open source project.
IBM SDK for Node.js v4 is the latest version available, with v1.2 and v1.1 also available from the Downloads section of IBM SDK for Node.js.
The following table summarizes the version mapping between IBM SDK for Node.js and the Node.js releases as shared by the open source project team:
Note 1: For the latest and most up-to-date information, please refer to the IBM SDK for Node.js home page.
Steps to Install IBM SDK for Node.js
The following section takes you through the step-by-step process of installing IBM SDK for Node.js v1.2 on the Windows platform. With the GUI installation mode, you should not see any difference in the steps to install the build on a Linux / UNIX OS.
Step 1: Choose one of the platforms listed below, download IBM SDK for Node.js, and save it in a directory of your choice on the file system.
Step 2: Run the EXE file as an Administrator
Step 3: Choose your preferred language. In this post, let's choose 'English' as the preferred language for installation.
Step 4: The 'Introduction' screen appears, as shown below. Read through the guidelines and precautionary notes and click 'Next' to continue.
Step 5: The Software License Agreement is now presented. Read through the general terms, select the radio button 'I accept the terms in the license agreement', and click 'Next' to continue.
Step 6: You are now prompted for the location on the file system where you would like to install IBM SDK for Node.js. By default, the product is installed under "C:\Program Files\IBM\node" on a 64-bit Windows operating system, but you can choose a different location, as shown in the following screenshot.
Step 7: The next step covers the locations where product icons should be created and the option to make the product available to other users of the operating system. Make your choices and click 'Next' to continue.
Step 8: Based on the choices made in Steps 3 through 7, an installation summary is presented for review. Click 'Install' to initiate the product installation.
Step 9: The installation of IBM SDK for Node.js takes only a couple of minutes to complete.
Step 10: On successful installation of IBM SDK for Node.js, the installer displays the following message.
Confirm the availability of IBM SDK for Node.js on the user's machine
Once the installer has exited, confirm that IBM SDK for Node.js is available among the Windows programs. Navigate to Start --> All Programs and look for the entry 'IBM SDK for Node.js (TM)', with the 'Node' utility and the 'Uninstall' icon available under it as its entries.
The operating system is now set up with IBM SDK for Node.js, and the machine is ready to execute Node.js programs and applications. In upcoming blog posts, let's discuss how to run Node.js programs against IBM Informix Database Server using communication protocols other than the MongoDB API and REST API.
Stay tuned .....
- Have you wondered about installing and configuring an enterprise-class database server on an ARM or an Intel processor in an embedded platform?
- Have you looked around for a detailed step-by-step process to embed IBM Informix on an ARM or an Intel processor?
- Have you heard about the Internet of Things (IoT) Kit that facilitates application development in minutes on an embedded platform?
Take note of the Internet of Things (IoT) Kit. It is an executable script that installs a set of components on ARM and Intel devices, preparing the environment on an embedded platform for application development within minutes. The major components installed by the IoT Kit are IBM Informix, Node.js, and Node-RED, in ready-to-use fashion.
The blog post 'How To Install The IoT Kit' by a team of engineers at IBM provides a step-by-step process to install the Internet of Things (IoT) Kit on ARM and Intel devices.
A six-minute video on YouTube titled 'Install & Configure IBM IoT Kit on Raspberry Pi 2' showcases a do-it-yourself (DIY) demo, with steps to install and configure the IoT Kit on a Raspberry Pi 2 device powered by an ARM Cortex-A7.
The above-mentioned blog post and YouTube video should enable you to go ahead and perform a do-it-yourself setup on a Raspberry Pi or BeagleBone Black, or another ARM or Intel device.
IBM Informix facilitates deployment of Informix instances on any number of target servers by capturing a snapshot of an Informix instance that is up and running on a source server.
A snapshot gives you a means of backing up an Informix instance in its as-is state and then deploying it on multiple target servers. The advantages of deploying a snapshot are:
- You don't have to go through the process of installation and configuration on every target server (though one can install and configure IBM Informix with minimal fuss within a few minutes).
- Capturing the snapshot as-is captures the schema and data of the database(s) within the Informix instance at that moment.
- Deploying such a snapshot on target servers brings the instances up in the same state as on the source server, significantly saving time and resources; users get up to speed hardly missing a beat, as though they were working on the source server.
IBM Informix provides the following two utilities to perform snapshot deployment effectively:
ifxdeployassist : The Deployment Assistant (DA) creates a snapshot of an up-and-running Informix database server instance while the instance is in On-Line mode. This utility is used on the source server.
ifxdeploy : The Deployment Utility (DU) takes the snapshot created by the DA on the source server and deploys it on the target server. It optionally refers to the 'ifxdeploy.conf' configuration file to configure the parameters of the target Informix instance.
The following section briefs you on the step-by-step procedure to create a snapshot using the DA and then deploy it on the target server using the DU.
Creating Informix Instance Snapshot using Deployment Assistant:
Choose an Informix instance as your source server. Ensure the Informix instance is up and running (verify the output of the "onstat -" command). Configure the instance and fine-tune the settings for performance, as needed.
If the target environment is familiar, identify the names of the users who will be accessing the server and then GRANT the necessary privileges against the DEMODB, SYSADMIN, and SYSUSERS databases. If you are not familiar with the target environment, plan to grant privileges to 'public', which covers any user accessing the machine:
DBACCESS --> DEMODB --> SQL EDITOR
GRANT DBA TO INFORMIX;
GRANT DBA TO ADMINISTRATOR;
GRANT DBA TO PUBLIC;
GRANT DBA TO <USERS>;
Once the privileges have been granted, we are ready to take a snapshot.
Decide on the archive file format for the snapshot, i.e. ZIP, TAR, GZIP, etc.
Execute the command:
Windows : ifxdeployassist.exe -a zip -d
Linux / UNIX : ifxdeployassist -a tar -d
Here, the '-a' flag defines the target archive file format and the '-d' flag indicates that data should also be picked up as part of the snapshot.
The Deployment Assistant window will appear on the screen. Follow the instructions in the window to complete the activity of taking an Informix instance snapshot.
Click Finish to complete the snapshot activity and exit the window.
Verify that the snapshot activity has created two files: one with the instance details and the other with the dbspace details (the file with the *_db extension).
Copy the 'ifxdeploy.exe' file (on Windows) or 'ifxdeploy' file (on Linux / UNIX) found under %INFORMIXDIR%/bin, the 'ifxdeploy.conf' file found under %INFORMIXDIR%/etc, and the snapshot files obtained above to a directory on the target server.
Deploy Informix Instance Snapshot using Deployment Utility (DU)
On the target server, confirm that you have received the 'ifxdeploy' utility, the 'ifxdeploy.conf' file, and the snapshot files intact, without any corruption.
Ensure the user on the server has read access to these files.
Extract the contents of the snapshot file that holds the dbspaces, identified by the "*_db" suffix.
Now edit the 'ifxdeploy.conf' file to update the parameter values to suit the target deployment. The parameters to review include:
INFORMIXSERVER : Name of the Informix instance (same name as the instance on the source server, or a custom name)
PROTOCOL1 : Type of communication protocol to be used. Retain the same as the source server
SQLIPORT : Port number. Retain the source server's value or provide a unique port number on the target server
DRDAPORT : If DRDA is available and needed
SERVERNUM : Unique server number. Retain the source server's value or provide a unique number
INFORMIXSQLHOSTS : Name of the Informix SQLHOSTS file. Retain the same as the source server
INFORMIXDIR : Directory on the file system where the utility shall deploy Informix on the target machine. This directory shouldn't be created in advance; the utility takes care of it
ONCONFIG : Name of the Informix ONCONFIG file. Retain the same as the source server
START : Number of seconds after which the utility starts the target Informix instance
SNAPSHOT : Absolute path to the snapshot file taken on the source server. This is the file that holds all the binaries and executables that make up the Informix instance on the target machine
RELOCATE : Location of the dbspace chunks on the target server. Either provide the directory where you extracted the "*_db" zip/tar file, or move the chunks to another directory and provide that path here
INFORMIXPASSWORD : Password for user 'informix'
SYSTEM : Set the value to '1' to use the Local System user account, or '0' for user 'informix'
LOGFILE : Location of the log file that captures the output of the snapshot deployment
ROOTPATH : Absolute path of the ROOTDBS file after it has been extracted on the target server
WIN6432 : Set to '1' if porting from a 32-bit source to a 64-bit target
Refer to the attachment "ifxdeploy.conf" for further details and an illustration of how to update the config file.
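As a rough sketch (all names, ports, and paths below are examples, and the exact file syntax may vary by Informix version), an ifxdeploy.conf for a deployment like this might contain entries along these lines:

```conf
# Example values only -- adjust every entry to the target environment.
INFORMIXSERVER ol_informix1210
SERVERNUM 0
PROTOCOL1 onsoctcp
SQLIPORT 9088
INFORMIXDIR F:\embed_demo\informix
ONCONFIG onconfig.ol_informix1210
RELOCATE F:\embed_demo\chunks
INFORMIXPASSWORD mypassword
START 60
LOGFILE F:\embed_demo\ifxdeploy.log
```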
Open a command prompt on the target server and set the environment for the following two parameters:
INFORMIXDIR : Directory on the file system where the utility shall deploy Informix on the target machine. This directory shouldn't be created in advance; the utility takes care of it.
INFORMIXSERVER : Name of the Informix instance (same name as the instance on the source server, or a custom name).
Once the environment is set, execute the following command to initiate the Informix Deployment Utility and deploy the snapshot on the target machine:
ifxdeploy.exe -f "F:\embed_demo\ol_informix1210_20141229_1210.zip" -config ifxdeploy.conf -verbose
"-f" : the physical location of the snapshot file
"-config" : the configuration file referred to by the snapshot deployment utility during deployment
Refer to the attachment "deploy.bat", which illustrates how to quickly prepare a batch file (for Windows), or the attachment "deploy", which illustrates how to quickly prepare a shell script (for Linux / UNIX), to perform the deployment and start the Informix engine.
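The attachments themselves are not reproduced in this post; as a sketch (the directory, server name, and snapshot file name are placeholders), a minimal Linux / UNIX deploy script might look like:

```shell
#!/bin/sh
# Sketch only: run from the directory holding ifxdeploy, ifxdeploy.conf and
# the snapshot archive. ifxdeploy creates INFORMIXDIR itself.
export INFORMIXDIR=/opt/IBM/informix_target
export INFORMIXSERVER=ol_informix1210
./ifxdeploy -f ./ol_informix1210_snapshot.tar -config ./ifxdeploy.conf -verbose
```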
The utility takes a few minutes to complete the deployment and exits after completion.
Verify the successful completion of the snapshot deployment by checking the status of the Informix instance on the target machine with the command "onstat -".
Please click here to download the attachments referred to in this blog post.
IBM Informix Embeddability Deployment Tutorial
IBM Informix provides a variety of tools and utilities to assist in performing data movement activities. Considering factors such as the size of the database, source and target locations, the type of data, the nature of the environment, and so on, the end user can choose the right tool or utility to move data from one database to another, one instance to another, or one machine to another. Each tool and utility defines its own scope of usage and works best when used within that scope.
In the interest of time, this blog post concentrates only on the DBEXPORT and DBIMPORT utilities and provides step-by-step instructions to export data from a source database and import it into a target instance to re-create the exported database, while successfully negotiating the challenges of schema/table owner names, which usually differ between machines.
Export an Informix database using the Informix DBEXPORT utility
Ensure that the Informix server is up and running (verify the output of the "onstat -" command).
Make note of the dbspace names available on the setup, or at least of the dbspace that hosts the database and all of its associated tables. Capture the output of the command "onstat -d" to a file, which shows the list of available dbspaces, and filter it down to the relevant set.
At the Informix command prompt, execute the following command to export the contents of the said database:
dbexport -cq -nw <database_name>
Example: Assuming the name of the database is demo_db
dbexport -cq -nw demo_db
On successful completion, the dbexport operation will have created the export directory <database_name>.exp (demo_db.exp in our example), containing the schema file <database_name>.sql along with the unloaded data files.
Verify its successful creation and availability.
Copy the <database_name>.exp directory to the target computer / server.
Import an Informix database using the DBIMPORT utility
Ensure the <database_name>.exp directory (demo_db.exp in our example) was correctly copied from the source server to the target server.
Ensure the user has all the privileges needed to access and work with <database_name>.exp to perform the dbimport operation.
Create dbspaces to match the names of the dbspaces that were available on the source server. The physical locations of these dbspaces on the file system can differ between source and target, and so can their sizes (as long as the dbspaces are large enough to hold the database being imported); however, matching dbspace names is mandatory. If dbspaces with the same names are already available, skip creating them; just ensure you have sufficient space to perform the import operation.
If the operating system user performing the dbimport operation has the same name as the owner on the source server, we can execute the dbimport command directly. Otherwise, we need to edit the <database_name>.sql file (demo_db.sql in our example) to change the owner name or remove the owner details, so that the dbimport command creates the database with the new ownership details.
If the owner is different, locate the <database_name>.sql file inside the <database_name>.exp directory. Identify the entries where table owner details appear and remove them; the owner name is prefixed to the table names, as shown in the following illustration:
create trigger "informix".se_uominstrig insert on "sde".st_units_of_measure
create table "sde".geometry_columns
create table "informix".sysbldobjects
Here, in the above illustration, we need to remove the "informix". and "sde". prefixes, so that all the tables and database components are created with the new ownership details.
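Removing the owner prefixes by hand is tedious for a large schema, so a stream edit can do it in one pass. The sketch below fabricates a one-line stand-in for the schema file (demo_db.sql is an assumed name); in practice you would run only the sed line against your real <database_name>.sql:

```shell
# Fabricate a one-line stand-in for the exported schema file.
printf 'create table "sde".geometry_columns (id integer);\n' > demo_db.sql
# Strip the "informix". and "sde". owner prefixes in place (GNU sed).
sed -i 's/"informix"\.//g; s/"sde"\.//g' demo_db.sql
cat demo_db.sql   # create table geometry_columns (id integer);
```

Keep a copy of the original .sql file before editing, in case any owner-qualified references need to be restored.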
Once the file is ready, execute the following command to perform the dbimport operation:
dbimport -c <database_name> -l
dbimport -c demo_db -l
The flags used here are -c, which tells dbimport to continue even when certain non-fatal errors occur, and -l, which creates the database with transaction logging.
Click Here to know more about IBM Informix DBExport and DBImport utilities
Modified on by KiranChallapalli
In case you didn't know already, Informix Warehouse Accelerator (IWA) is a state-of-the-art in-memory database (IMDB) that provides a Google-like experience for analytic queries. With its columnar storage capability and extreme scalability using clustering, it is a disruptive technology that provides “speed of thought” analytics to organizations' operational and historical data.
To give you a clearer picture, the traditional approach to providing analytics in an enterprise is to separate the transactional system (OLTP) from the data warehouse/data mart. The analytic queries are typically complex SQL accessing and joining millions or billions of rows. For such queries to return results in a reasonable time, a lot of tuning (think indexes, cubes, materialized views, etc.) is required, and it has to be repeated over time as the data changes. The beauty of IWA is that it serves this type of analytical query several orders of magnitude faster, all without the need for tuning and irrespective of changes to the data.
Apart from this, the availability of an ELT tool (SQW) and the IBM Smart Analytics Optimizer (ISAO) studio for quick and easy data mart creation and loading, along with the IWA installable edition, makes things even easier for organizations: they neither have to worry about data loading and transformation, nor need warehousing experts to build and load data marts in IWA.
My first chance to really try out IWA in a real-world situation was a proof of concept (PoC) showcasing IWA for an Indian stock trading company. They needed to gather stock market movement information, identify trends, and accordingly advise their clients which stocks to buy and which to sell. With their setup at the time, they had a 24-hour lag: analytical queries ran overnight to create summary tables, and then, for the business to understand the trends, further analytical queries had to be run against those summary tables. This caused several problems:
- Long query processing times (summary table creation time + analytical queries against these summary tables)
- No ad-hoc access to data
- Any change request from the business was difficult and time-consuming to implement
- If summary table creation failed overnight, it caused a 24-hour delay
Apart from that, they had a single-system setup that made scaling up (the customer expects data size to increase manifold over the years) a cumbersome and expensive proposition.
When we ran the PoC showcasing IWA, we showed how they could, for one, eliminate the need for summary tables and run analytical queries against near-real-time data, with queries performing up to 35 times faster while using only a quarter of the system resources. And with cluster support, we showed that scalability is not an issue anymore.
In another PoC, the customer was an India-based healthcare company that wanted to run analytics to understand the usage patterns of its customers (i.e. patients). Here we saw even more spectacular comparative numbers: queries on IWA ran up to 1,200 times faster while using only a quarter of the resources. In one instance, a query that ran for an hour on their existing system completed on IWA in 3 seconds!
Leveraging the spectacular successes of these two PoCs, we have been able to reach out to more customers, and a few more such proofs of concept are lined up, keeping me busy and learning more about the technology. The most gratifying part is running ad-hoc queries 100x to 1000x faster than the competition and watching the pure amazement on the faces of the customers, especially the technology folks. Priceless!
Last time I posted my thoughts about IoT and how embedded database plays a crucial role.
One excellent example of how IoT improves our lives is the heart monitor with implanted sensors. The monitor device can store information in the embedded database and analyze it to decide if and when to alert emergency services over the mobile network. But there is only so much information that an embedded database can store: you would ideally want to keep the person's medical history and other information in a central repository. What better technology than a cloud-based database, which can be connected to remotely to store information? This cloud DB could also be used to run analytics on a particular person, or even on a number of people in that area with a similar condition, to (maybe) allot emergency resources accordingly.
Some of the advantages of cloud-based systems are lower setup cost, faster time to market, flexibility, scalability, accessibility, and so on. But one main risk factor for any cloud system is security. Enter Informix!
IBM Informix database software incorporates design concepts that are suited to the challenges of IoT, resulting in extremely high levels of performance and availability, distinctive capabilities in data replication and scalability, and no administrative overhead. It is renowned for its near hands-free administration (a valuable feature, especially for a cloud-based system). Added to that is its specialized high-performance support for time series and geospatial data, and with its support for JSON/BSON data types it can now store varying types of data without worrying about constant schema changes.
Informix in the cloud is available in a number of offerings: IBM SoftLayer Cloud, IBM Bluemix, and Amazon Cloud.
One of the largest freight railroad companies uses a cloud based setup for maintenance of its fleet. Data (including video) is collected from the various sensors and detectors, consolidated in Informix database and then pushed to the cloud for online analytical processing.
To conclude, a Cloud based database is an excellent value addition in an IoT setup in a number of instances and IBM Informix with its inherent features is a perfect fit. For more information on Informix for Cloud Computing, go to http://www.ibm.com/informix/private-cloud.
What is IoT? What is this Internet of Things? If you look it up, there are a number of definitions. Here are a couple:
"Uniquely identifiable objects and their virtual representations in an internet-like structure" - Wikipedia
"The Internet of Things represents an evolution in which objects are capable of interacting with other objects. Hospitals can monitor and regulate pacemakers long distance, factories can automatically address production line issues and hotels can adjust temperature and lighting according to a guest's preferences, to name just a few examples." - IBM
If I wanted to venture out with a definition, I would put it as: "Physical things enabled for digital identification and connected to other things such that they can exchange data to serve a purpose, all without human interference". The connection itself might or might not be using the internet, but the key thing to note here is the LACK of human interference. To step back a little bit, we have had machines "talking" to each other for years now. Computers, smart phones, tablets, etc. But, if you notice, they are almost always communicating at the behest of a human being. The IoT proposes to take it one step further where machines can generate, transfer, analyze and act on that data.
Let us consider an example. Cars have had sensors of various kinds generating a lot of data, but most of it was never used. Now, though, the US government is considering mandating that all automobiles sold in the US by 2020 have the capability to "recognize and talk" to each other on the highway to avoid accidents. While you obviously need a number of sensors that can detect another car, you also need the capability to store that information so it can be used to act appropriately in real time. That is where an embedded database comes into the picture.
Today a typical embedded device is great for collecting and forwarding data for a single device, but in an IoT world there will be many devices, and customers might want a consolidated view of all of them. To continue with the car example, imagine a number of sensors sending information about the car behind: one that sends a picture of it, a second that sends its GPS coordinates, another that sends distance information (maybe via sonar-like detection), and so on. All of these generate different types of data, and you want your car to consolidate all of it intelligently to understand whether the car behind is a threat.
Also, these devices generate a huge amount of data and in typical embedded devices collection space is limited. The variety and volume of data make it a Big Data problem.
Having an enterprise class database embedded in that car would be helpful in handling all the above requirements and also makes it compatible with your central repository which maybe is in a cloud.
Informix database software incorporates design concepts that are uniquely suited to the challenges in today's embedded devices, resulting in extremely high levels of performance and availability, distinctive capabilities in data replication and scalability, with no administrative overhead. Some major features are:
Easily embedded in a device with an install footprint and memory requirement as low as 64 MB.
Has built-in support for time series and spatial/GIS data.
Has analytics built into the database.
Supports JSON/BSON and SQL apps simultaneously in the same database, allowing it to handle a variety of data sources.
Easily scales-out across multiple devices.
Has built-in autonomics with self healing, self configuration and automation with the DB scheduler.
Embedding Informix in consolidation devices on the edge of the network of IoT allows for:
Complex store and forward capabilities with transformation and aggregation of data
Business decisions made on the edge, closer to the producer of the data
A number of customers already recognize the value that embedded Informix brings to the table. Especially from an IoT perspective, Shaspa has chosen to power its devices with embedded Informix.
Shaspa is a European company providing smart solutions. They built a Smart Home Solution gateway device to manage and interact with security, entertainment, health, voice, and gesture recognition, connecting TVs, computers, and mobile devices to smart meters, lights, appliances, plugs, and sensors. The device's solutions portfolio allows immediate responses to local events and analysis of larger trends and insights into customer behavior.
To conclude, the embedded database is very critical for IoT devices in a number of instances. IBM Informix is a leader in that space and a perfect fit for processing data from IoT devices. For more information on Informix Embedded Database, go to http://www.ibm.com/informix/embed.
A customized Informix three day boot camp was conducted in Mumbai from February 25 to 27, 2014. This is the second Informix boot camp this quarter.
Day one covered an introduction to Informix, day two was about Informix TimeSeries and day three covered Informix Warehouse Accelerator (IWA). There were twenty attendees from various companies (including customers and business partners) and with a good mix of roles - architect, DBA, application developer, etc.
The sessions included presentations followed by hands-on workshops that allowed them to immediately implement what they learned. Ravi Vijay handled days one and two and yours truly delivered the IWA topics.
At the end of day three, the participants were given the opportunity to take free certification tests, and 14 of them became certified Informix professionals. Throughout, the sessions were handled in an interactive mode with very good participation. Based on the feedback, the boot camp was a success, with the participants asking for more such events.
Here are a couple of pictures we took during the sessions.
The year 2014 has certainly started on a bright note. We have been on our toes with Customer and Partner engagements. We have just stepped into the 3rd month of the year and we've already concluded three Informix Bootcamps, 2 in India and 1 in Sri Lanka. The enthusiasm shown by Informix users to join the Chat with Labs sessions, scheduled once in a month, has been good as well. Now that we've had two sessions already in the first two months, we are now gearing up for the 3rd one, to be scheduled in the 3rd week of March, 2014.
One of our Business Partners in Sri Lanka had expressed interest in having an Informix boot camp scheduled in Colombo, given the traction he has been seeing in the business. With registrations coming in from a high-profile electricity company, a large telecom provider, a large retailer, a private bank and a national bank, we eagerly scheduled a five-day boot camp in Colombo from February 24 to 28, 2014: two days covered administration and the remaining three covered performance tuning.
The sessions were interesting, and the room full of attendees was able to relate the material, along with the use-case scenarios, to their own work environments. Time and again they shared their current implementation methods, explained why they chose a specific implementation model, and discussed ways to improve their existing setups. The facts and figures of the Informix cluster implementation at one of the major telecom providers impressed me: their OLTP environment handles close to 6 TB of data, and its transactions fill over 20 GB of logs per day. Another high-profile electricity company mentioned that it plans to automate meter reading and would look at using Informix TimeSeries technology for its meter data management.
I also had the privilege of visiting one of the largest retailers in Sri Lanka, a long-time Informix user. They have developed applications using Informix 4GL and have deployed them on an Informix database backend for over a decade now. I had heard that they had successfully implemented a vehicle tracking system and was curious to learn more about its implementation. They used Informix TimeSeries technology, integrated with Google APIs, to track the current location of their vehicles on a map. That feature has enabled their management to make business decisions on the fly and minimize business investments and expenditures.
IBM Informix inspires Informix enthusiasts (new and existing customers and partners alike) with a new set of features in every release, reinforcing its position as the next-generation database server. In turn, Informix enthusiasts inspire the Informix development team by showcasing innovative implementations of Informix features, bringing value to the product and services across the world.
Here are a couple of snapshots taken during the Informix boot camp in Colombo, Sri Lanka. Thanks to our Business Partner.
I often hear from customers that they already have several instances of Informix running in their organization and would like to evaluate the Flexible Grid feature, which eases their administration efforts and gives them the luxury of performing DDL and DML operations from a single node.
This short technical snippet is intended for users who want to take advantage of the Informix Flexible Grid feature. It is also useful for users who already have an Informix cluster or a replication topology in place and would now like to evaluate Informix Flexible Grid.
To start with, the basic requirement for creating a grid is that the Informix servers participate in Enterprise Replication (ER) before they are pulled into the grid as grid servers. The following content offers a step-by-step walkthrough to help users define an Informix server as an ER server and then create a grid, and regions within the grid, based on their needs and requirements.
Requirement 1: List the Informix servers that are to be defined as ER servers and will subsequently participate in the grid.
Requirement 2: To define a server or set of servers as ER servers, the user must manually update the SQLHOSTS file to add a group entry and a server entry for every participating server.
To illustrate, the following example defines the Informix servers ‘clp’, ‘prot4’ and ‘special_1’ as ER servers. The SQLHOSTS file on each of the three servers looks as follows:
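The original listing is not reproduced here; the following is a reconstructed sketch of the entries, where the protocols, hostnames and service ports are placeholders and should be replaced with your own values:

```
g_prot4      group     -       -       i=100
prot4        onsoctcp  host1   9088    g=g_prot4
g_special_1  group     -       -       i=200
special_1    onsoctcp  host2   9090    g=g_special_1
g_clp        group     -       -       i=300
clp          onsoctcp  host3   9092    g=g_clp
```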
Here, g_prot4, g_special_1 and g_clp are the group names, and i=100, i=200 and i=300 are their unique identifiers.
In short, all three SQLHOSTS files have the same entries; that is, each SQLHOSTS file also contains the entries for the other two servers.
Requirement 3: The Informix servers that are shortlisted to be defined as ER servers should have connection entries in the OpenAdmin Tool.
Step by Step process to define ER Server using OpenAdmin Tool
Step 1: Log on to OpenAdmin Tool
Step 2: Navigate to the Replication → ER Domain page, available in the left-hand menu, to add an Informix server or set of Informix servers to participate in ER activity.
Step 3: The user is prompted with a message asking whether to define the Informix server as an ER server. Click YES and continue.
Step 4: The user will be now asked to work through the ER Wizard to define the Informix server as an ER server.
Step 5: In the first step, the wizard prompts the user to choose the server that shall be defined as an ER server.
Choose a server and click NEXT.
Step 6: In the second step of the wizard, the user chooses to create a new ER domain and can make the node a ‘Root Node’, as shown below. If the server is the first node in the domain, it will be the root node; if the user adds it to an existing ER domain, it can be chosen as either a non-root node or a leaf node.
Step 7: In the third step of the wizard, the user is prompted to specify the sbspace (for ER row data) and dbspace (for ER catalog information). If the Informix server doesn’t have these already, the user can create them in this step.
Click NEXT to continue.
Step 8: In the fourth and final step of the wizard, the user can review the settings. Click NEXT to define the new ER server.
Step 9: The user should see the conclusion message “Congratulations! The new ER server was defined successfully.”
Repeat from Step 2 to define the remaining Informix servers that are shortlisted to be part of the Flexible Grid.
Note: Defining an Informix server as an ER server doesn’t enable or start replication. It only means that the server(s)/node(s) are ready to have replication enabled.
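For users who prefer the command line over OAT, the same outcome can be sketched with the cdr utility. This is a sketch only, reusing the group names from the example above; the ATS/RIS directories, sync server and grid name (my_grid) are illustrative, so check the cdr command reference for exact options before running anything:

```
# Define the first (root) ER server:
cdr define server --connect=clp --ats=/ats/clp --ris=/ris/clp --init g_clp

# Define the other servers, synchronizing their ER catalogs with the root:
cdr define server --connect=prot4 --ats=/ats/prot4 --ris=/ris/prot4 \
    --init --sync=g_clp g_prot4
cdr define server --connect=special_1 --ats=/ats/special_1 --ris=/ris/special_1 \
    --init --sync=g_clp g_special_1

# Later, once the ER servers are defined, they can be pulled into a grid:
cdr define grid my_grid g_clp g_prot4 g_special_1
cdr enable grid --grid=my_grid --user=informix --node=g_clp
```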
IBM Informix has reinforced its position as a next-generation database and has been successfully adopted by enthusiastic customers, existing and new alike. Continuing that legacy, the Informix India User Group has re-evolved to suit the business needs of the ever-changing user community.
IBM Informix proudly presents the Informix India User Group with a new face and added facets that will empower you to network, collaborate and contribute on a global stage. Elevate yourself from a passive member to an active contributor and explore the whole new user group to network across geographies. The user group is dedicated to learning, sharing and networking on IBM Informix technology. It houses a comprehensive wiki for end-to-end information on Informix, a forum for discussing features and technologies, a blog section that gathers blogs written on Informix, a files section hosting publicly available files on Informix technology, and events and activities that encourage users to participate, share their insights and do much more.
Register Today in 4 simple steps in under 3 minutes to the Informix India User Group:
Step 1: Register to the IBM Developer Works (DW) space
Step 2: Open the Informix India User Group forum page
Step 3: Log in (if not already logged in) to the DW space using the credentials created in Step 1
Step 4: Just click the link 'Join this Community' to become a privileged member of the Informix India User Group.
Hector Ayala interned with IBM Informix Development team during the Fall/Winter of 2013. His blog (http://informix.jfmiii.com/?p=317) describes his experience developing mobile/smart-device applications with Informix using the new (in Informix 12.10.xC2) hybrid (SQL & NoSQL) capabilities, as somebody with no prior knowledge of Informix.
It demonstrates how simple, flexible and powerful Informix is in delivering capabilities and value. Simply Powerful!
IBM Informix introduced v11.70 in October 2010. With features such as the platform-independent Flexible Grid, capable of performing rolling upgrades and propagating DDL; the Informix Warehouse Accelerator's ultra-fast in-memory, columnar database technology supporting star-join queries and time-cyclic data; storage provisioning to automate storage allocation; deployment utilities for seamless installation and improved embeddability; new SQL compatibilities and interoperability with popular open-source products; and trusted context and row-level auditing support, IBM Informix v11.70 reinforced its position as a next-generation database and was enthusiastically adopted by customers, existing and new alike.
Continuing that legacy, IBM Informix v12.10, released on March 26, 2013, focuses on ever-changing database usage trends, ever-demanding customer requirements and ever-expanding industry horizons. It lives up to its billing as a low-cost, small-footprint RDBMS viable for most custom business applications, from small to enterprise scale. Playing to the core strengths of Informix, v12.10 significantly enhances the warehouse and TimeSeries technologies to address the explosion of Big Data.
Five eye-catching technologies to watch for in the IBM Informix v12.10 release:
Informix Warehouse -
Consistent with the industry trend towards in-memory and columnar database technology, IWA uses the concept of a snapshot. It now introduces the ability to perform partition updates, automatic partition refreshes and trickle feed, putting IWA into the real-time data warehousing category. The TimeSeries-to-IWA loading feature allows seamless migration of data from one format to another.
While IWA supports a large amount of commonly used SQL for data warehousing, the Informix v12.10 release adds a whole class of accelerated SQL/OLAP ANSI-SQL functions. SQL/OLAP eases integration with BI tools. The OpenAdmin tool (OAT) is now equipped to support IWA, minimizing reliance on the Informix Smart Analytics Optimizer (ISAO).
Informix TimeSeries -
Informix v12.10 introduces the TimeSeries plug-in for Data Studio, effectively bridging the gap in technical expertise required to work with Informix TimeSeries. Loading of TimeSeries data sets has improved with new custom loader programs, controlled writing to containers, and reduced logging. Managing TimeSeries has been simplified with rolling window containers and the ability to replicate TimeSeries data sets across high-availability cluster and Enterprise Replication environments.
Processing of time series data is unique to Informix. The TimeSeries feature is implemented as a data type in a unique way, providing attractive performance and storage characteristics for managing and analyzing sets of time-stamped data. The integration of TimeSeries and IWA fits integrated, interconnected business requirements, demonstrating its capabilities in collecting, managing and performing complex analysis on growing Big Data.
Flexible Grid and Replication -
Informix has pioneered the art of data replication under extremely demanding conditions, providing excellent performance without burdening the source database, requiring few or no application changes, maximizing system availability, ensuring consistent delivery of data, and addressing the full range of business and application needs to use enterprise computing resources most effectively.
Informix v12.10 release has honed its high availability and replication technology. Replication is now supported on servers with different owners. Setting up a data consolidation model has been simplified. Monitoring improvements for replication queues and integration with autonomic storage provisioning significantly eases DBA responsibilities and reduces the potential for human error. A grid can be broken into meaningful subsets called Regions to ease administration and allow propagation of external files across the grid. High-availability cluster environments have stabilized application–database server communication against network failures. Informix Hypervisor allows you to quickly create preconfigured, optimized Informix instances in a cloud environment without having to install an operating system or the Informix Server.
Embeddability -
The Informix database server is associated with terms such as invisible, small footprint, self-configuring, self-healing, hands-free administration, and many more. IBM Informix v12.10 improvements have raised the bar for embeddability. Compression technology has greatly improved, and storage optimization is now automatic. Configuring an Informix server just got simpler: the new set of configuration parameters and their dynamic nature, the ability to set server environment variables in the local shell using a file, and the management of server connections on the Windows operating system demonstrate the ease and simplicity of working with the Informix server.
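As an illustration of that dynamic nature, many configuration parameters can be changed without restarting the server using the onmode utility. The parameter and value below are just an example; consult the onconfig documentation to see which parameters are dynamically tunable in your version:

```
# Change a parameter in memory for the running instance only:
onmode -wm AUTO_CKPTS=1

# Change it in memory and also write it back to the ONCONFIG file:
onmode -wf AUTO_CKPTS=1
```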
The SQL domain has been strengthened with enhancements to syntax options and the relaxation of many legacy syntax restrictions. Application compatibility has received a boost with the extension of the SQL packages and support for other functionality.
The new Primary Storage Manager performs backup and restore operations and helps embed parallel backup/recovery solutions with minimal effort.
OpenAdmin Tool -
IBM Informix v12.10 bundles the web-based, platform-independent OpenAdmin Tool (OAT) v3.11, which has a plethora of features that cater to application developers, administrators and end users. OAT v3.11 comes with striking features that enhance and enrich the look and feel and ease administration tasks, enabling the monitoring of vital server information from anywhere in the world. Enhancements to plug-ins result in deeper integration across features, benefiting business requirements. OAT accommodates the manageability functionality of the ISAO tool and provides the intelligence to handle data warehousing.
OAT v3.11 includes a native iOS and Android app to provide database monitoring from a mobile device.
Informix Genero has improved the HTML5 theme for Genero Web Client (GWC) and added new templates for the Business Application Modeling (BAM) tool.
Register - Collaborate - Author
The Informix India User Group, available on IBM's public domain, encourages one and all to share their experience and insights on Informix database technology on this public forum.
Are you looking for a blogging site to share your expertise on Informix? Do you juggle multiple sites to blog, publish and then publicize your technical expertise?
With the Informix India User Group you now have the option to register for the IBM DW space and request a blog. With a dedicated blog space assigned, you are empowered to share insights on Informix database technology, your experiences, tips and tricks, and much more. Catch the reader's eye with a captivating title and crisp content. Make your point in the blog content, arrive at a conclusion, or open it up for discussion. Blog content needn't run into pages, as that can make the reader lose interest.
I'll leave this blog section open to all Informix enthusiasts to pour in their insights and experiences and help evangelize Informix technology.