- Streaming data to Excel
- Easy setup for high-availability
- Resilient processing with the consistent region annotation
- Toolkits enhancements
Big data in motion
JacquesRoy 120000A2MS 2,265 Views
This has been in the works for quite a while but now it’s out!
This new version adds multiple interesting new features including:
Streaming data to Microsoft Excel makes it easy to create user interfaces that give real-time feedback on what's happening, in addition to providing all of Excel's capabilities for further processing of the data received.
A lot has been done on the high-availability front. It is much easier to set up redundant administrative services and have them fail over automatically when needed. In addition, there is no need for a DB2 database: Streams now relies on ZooKeeper to preserve all the state information. Also, to continue to improve on high availability, Streams no longer requires a shared file system.
There is a new feature that guarantees at-least-once processing of tuples within a region, or set of operators. It is easy to use: we simply add annotations that define the region and set a few parameters.
There have been enhancements to existing toolkits and the addition of new ones, such as support for Kafka in the messaging toolkit and the new HBase toolkit.
There is more to the new release of Streams. You can find the online documentation in the knowledge center at:
To get an idea of what's new in this release, take a look at:
The Informix development team has put a lot of effort over the last year or so into continuing to improve the product's capabilities.
We strongly believe that this new release will help everyone, customers and partners alike, address the challenges and changing needs of data management.
Will it be faster? Will it be easier to manage? Will it include new functionality? Will it be smarter to accommodate a smarter planet?
What about big data and analytics?
You're in for a treat! Here is the webcast information:
The New IBM Informix: It's Simply Powerful
Date: Tuesday, March 26, 2013
Time: 10:00 AM PDT
Don't miss it.
I dare add: to me, the new IBM Informix is simply wonderful!
I've been saying for quite a while now that smart meters represent BIG DATA and that Informix TimeSeries is the optimal solution for an operational data store.
We can complement the Informix capabilities with other IBM products. When it comes to real-time processing of huge amounts of data, the IBM solution is InfoSphere Streams.
It happens that Streams can interface with Informix as a data source or as a target (sink).
If you want to know more in this area, go take a look at the new information added to the Smart Meter Central wiki on Streams.
Two pages were added: one giving a quick overview of Streams (with a YouTube video) and another on setting up the environment.
The exact page URLs are:
The wiki URL of the welcome page is: https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/Informix%20smart%20meter%20central/page/Welcome
Make sure to bookmark it.
More to come as we go deeper into BIG DATA!
I arrived in Vegas Sunday mid-afternoon. Already, the activities have been going on for a day and a half. The expo floor looks good with Informix demos at multiple locations including the blade server with Informix and the theater presentation showing, at least, the clustering capabilities that include SDS, HDR, RSS, and ER.
The evening reception was in two parts: one in the expo and a second one for specific sections of the Information Management portfolio.
This year I decided to stay at the Luxor, next to the Mandalay Bay. You can walk from one hotel to the other without going outside. To go from my room to the registration desk takes a little over 15 minutes. On my way, I passed 3 Starbucks. I guess a lot of attendees need that to get through the long hours we'll have this week.
The IOD conference is less than a week away. I received an email about a blog entry that lists all the book signings that will happen at IOD. A total of 10. I happen to be one of them.
I wrote a short book titled: "Informix Dynamic Server Application Development - Getting Started". It is a free book that will be available at the conference. My book signing session is as follows:
Tuesday 12:00 pm - 1:00 pm
Location: Mandalay Bay Registration Desk South
Since I'm giving up my lunch for this, please stop by and say hi. For more information on all the book signings at IOD, please see the following blog entry:
(Short URL: http://bit.ly/KB8zy)
I'm currently in Paris in the second week of a business trip. For a two-week trip, it is pretty common to have some clothes laundered; otherwise it makes for a lot of stuff to lug around.
I took a look at what was offered at my hotel: To launder one shirt (men), they charge 8.50 euros (around 12.37 US dollars). As I was leaving the hotel, I saw a hotel employee with a laundry bag in her hands. Looking at the size of the bag, I could just imagine the small fortune spent by the guest.
As I was walking to the IBM office, I passed a dry cleaner that advertised the cleaning and pressing of men's shirts for 2.20 euros per shirt for 5 shirts. The hotel charged over 3.8 times that price. With a little knowledge and a 5-minute walk, the hotel guest could save a significant amount of money: for 5 shirts, the price goes from 42.50 euros to 11 euros. For a company with a lot of employees who use that type of service, this can add up to significant savings.
Of course, that made me think of Informix. It is well known that IDS provides a high level of performance and scalability and requires minimal resources for its administration. In some cases, one database administrator can manage thousands of instances. Of course, it is much easier to go with a safe choice, use as much hardware as needed, and hire as many employees and consultants as the situation requires for managing the environment and developing business applications. This is simply the cost of doing business...
It seems to me that with a little knowledge and a little effort, that cost of doing business could be greatly optimized.
I recently received a note about the IOD conference, October 25-29, at the Mandalay Bay in Las Vegas. If you register by August 31, you can get the early bird hotel rate!
Please go to the Conference Site to learn more about the IOD conference and register. Here are the top reasons provided to attend:
More on the conference later.
The general session started with an example of context computing and an interview with Captain Phillips.
All that was pretty exciting, but what stole the show was the announcement of the partnership
Then I went on my way to attend Streams sessions talking about use cases.
The first one I attended was about a partner, Voci, that has an appliance that converts audio to text.
The next session was a panel of experts on geospatial analytics.
In the afternoon, I attended a session on the features of the new Streams beta that was announced last Friday.
I followed with a session on context computing used to counter fraud. I finished my day
The conference is winding down with the last day tomorrow.
Another full day.
It started at 7:00 with a breakfast meeting and was followed by a conference call.
"The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
My afternoon was taken by a Streams and text analytics lab.
I went back to the conference floor and had interesting conversations with many technical people
I'll be able to catch up on some Streams sessions tomorrow. I can't wait to hear about some customer/partner stories
Also, I heard through the grapevine that there may be a big announcement at the general session.
After walking by 3 different Starbucks, I arrived at the conference breakfast hall.
Then it was time to attend the general session that started at 8:15.
Multiple speakers expanded on these themes.
I particularly liked the line: "Geospatial data will become analytics superfood".
There were many interesting sessions to choose from but because of multiple engagements, I only attended
There was so much, if you are not at the conference, you may want to look for InsightGo to be able to attend some general sessions remotely.
Now it's time to move on to Tuesday!
The event went as planned at the Mandalay Bay convention center with presentations on:
Many people attended and were engaged in the presentations. Overall a success.
The Insight conference officially started with the opening reception.
We're up and going.
The conference is still being set up, but there are events happening this Saturday.
All sorts of other sessions are taking place in other areas of the Mandalay Bay convention center.
If you are already in Las Vegas for the Insight conference, this would be a good use of your time.
Finally, Sunday evening, the Insight conference officially starts with the Solution EXPO Grand Opening Reception
I'll post comments on the conference daily so, stay tuned!
We are barely more than two weeks away from the Insight conference.
As you know, Streams is excellent at providing real-time analytics. It can be used with other
It happens that I'll be participating in an IoT deep dive on Sunday October 26.
I'll be joining the main speakers:
The technical section is divided in three parts:
You can register for the event at: http://insight-deep-dive.eventbrite.com
Don't forget to come see me at Insight in my sessions and labs as well as a book signing
The book is: "The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
See you in Vegas!
Ok, this is probably not news to you but there is information you should know.
The Insight conference, formerly known as Information on Demand (IOD), is going on Oct 26-30.
For the week, I am particularly interested in the Streams sessions such as:
Just to name a few. I am involved in a few sessions:
The other exciting part for me is that I am coming out with a new book:
I am doing a book signing on Tuesday between 9:30 and 10:30.
The Insight conference provides many excellent learning opportunities on many subjects including Cloud, mobile/Social, security, analytics, and more.
It is also a great opportunity to network with experts from IBM, partners, and other customers.
A while back, I started reading a book called "Thinking, Fast and Slow" from Daniel Kahneman.
Daniel Kahneman is a professor of psychology who won a Nobel Prize in economics.
I have to admit, I am not done reading it. I need more "plane" time
Today, I just want to relate some parts of chapter 14 where he put together a test to see how people would classify individuals
"Tom W is of high intelligence, although lacking in true creativity.
After reading the description, the subject was asked to figure out which field of study Tom was most likely in.
The description was actually designed so people should rank computer science among the best fitting
I laughed out loud when I read that part. I immediately thought of one of my co-workers, Robert U., who
For those who read this blog, if you make corny jokes/puns and graduated in computer science rejoice.
The book is full of interesting information including the fact that even statisticians can misuse/misinterpret statistics.
"you dispose of a limited budget of attention that you can allocate to activities. . .
My conclusion: if someone tells you he/she's multitasking, they do trivial work.
When we talk about processing data in real time, it is easy to just write a program and be done with it.
A program is easy to write when it can process records sequentially. Once you reach the limit of this sequential processing, you start adding complexity that may represent the bulk of your work: You start by using multi-threading and eventually you need to also go to multi-processing to take advantage of multiple machines. It is much easier to use a framework to reduce those issues.
Still, a framework may give you the ability to distribute your processing, but how easy is it to do? You want proper tools to assemble the many operations that you want to link together. Then, you also need tools to easily identify bottlenecks so you can parallelize your operations. And what about all the standard operations you would expect to be able to do?
This is where a platform comes in. It gives you the foundation for distributed processing but also gives you pre-built capabilities to interact with the outside world (files, message queues, databases, and so on) and also analytics so you don't have to reinvent the wheel.
InfoSphere Streams is starting to engage the open-source community to provide additional capabilities to its real-time analytics platform.
This is still very early in the process, and we can assume we'll see it evolve quickly. That may also be a way to consolidate
One of the projects is under the name resourceManagers.
Learn more about what is available for Streams on GitHub by looking at the newest page from the InfoSphere Streams playbook:
Does anyone remember this cartoon? I think the first time I saw it was in the '80s. Still, it keeps coming back.
This used to apply to IT requests. It can also be applied to all sort of things, including how quickly you want to go from data to actionable information.
Real-time analytics apply in many industries including medical, telecommunication, and security. You can find additional examples in the
There is a special need in processing machine data. The data can be generated at such a rate that we need machines to analyze all that data.
Data in motion processing is here to stay. It is a great approach to solve many business problems. Of course, this approach does not work in a vacuum.
The IBM solution for data in motion is InfoSphere Streams. You can download a free copy of the software to learn about it.
Do you know about IBM Data Magazine? It is the regular newsletter based on ibmdatamag.com that many people receive in their inbox
This online magazine contains articles related to: Big Data and Warehousing, Databases, Information Strategy, Integration and governance.
My first article got published on January 31st and is titled: "Getting the big data ball rolling".
I have put together a plan for a series of articles. When it gets more in depth, I will complement the articles with
Until next time...
I have to say, these are busy times!
With a TimeSeries PoC and multiple activities around Streams, time flies by quickly.
It's been a while since I updated the InfoSphere Streams Playbook. This was overdue. There are new videos, training material and capabilities that were not reflected in the playbook. Here's what I updated:
With the end of the year so close, we can expect everyone to prepare for the new year. Looks like 2014 will be another fun year!
The other day I ran across an article on InfoWorld.com: "Cloudera pitches Hadoop for everything. Really?"
Of course, the article starts by mentioning the expression about hammers and nails. This is an old story and it appears that it is getting ready to repeat itself. Like it’s been said: “those who forget the past are doomed to repeat it”.
Hadoop has been the biggest star of the big data story. I have to say that it is revolutionizing data processing and for good reasons. Many seem to point to the use of cheap clusters based on commodity hardware. I personally prefer to attribute it to the large amount of data that has different requirements from traditional data processing.
There is now a new resource for Streams: https://www.ibmdw.net/streamsdev/
The Streamsdev site includes articles, blog entries, videos, and intro labs. You can also download the latest Quick Start edition of Streams from there: either the product itself or a VMware image with it, so you can do the labs at your leisure.
This site is put together by developers for developers. Still, if you are new to InfoSphere Streams, you can find something there for you too. Just go to the getting started section under "Docs".
Since the IBM Information on Demand (IOD) conference starts this weekend, you can also find information on the Streams activities (labs, presentations) during the conference. You can see the next few activities on the main page, or a more complete calendar under events.
This site is evolving. You should go look at it at least once a week to see what's new.
Hopefully many of you are going to the IOD conference next week. Enjoy the conference and learn a lot!
Last week, on October 22, IBM announced a new version of InfoSphere Streams: version 3.2.
The new version includes some nice improvements such as remote development, a REST API for data access, and improved toolkits.
If you are interested in trying Streams, IBM provides the quick start edition that you can download as a native product or
Of course, you may need more information on how to use Streams. You can start by browsing through the InfoSphere Streams Playbook at:
If you have questions, don't hesitate to drop me a note or comment on my blog entries.
Until next time!
If you've been following my blog over the last few years, you may have noticed a few things lately:
The significant part is really the name change. It went from "Informix and Computing" to "Big data in motion".
Let me first address the Informix part. Yes, I am still involved with Informix activities. In fact, I am currently working on a proof-of-concept for Informix TimeSeries that involves technologies such as Java, kafka, zookeeper, fastjson, messagePack, and more. So, Informix continues to be involved in "Big Data" and its use with other current technologies.
Will I continue to talk about Informix? Probably. It all depends if I believe I have something interesting to say on the subject. As long as I have activities with Informix I have opportunities to find interesting information.
Now. What about "Big data in motion"?
A while back I decided to go back to my old team: Worldwide Technical Sales and Enablement.
My main focus is now on InfoSphere Streams. This has already been an interesting ride. I've worked on multiple projects that include putting together an extensive training session, work on PoCs, writing DeveloperWorks articles, and more. I've even put together a DeveloperWorks wiki that centralizes all sort of resources related to InfoSphere Streams. I called it the InfoSphere Streams Playbook.
InfoSphere Streams is part of an overall "Big Data" architecture. There are many ties between Streams and the BigInsights platform and any other technologies that help getting big data under control. Yes, that includes Informix. It also includes many other technologies.
My focus may be mainly on "in-motion" data but the entire "Big Data" solution stack eventually interacts with it. That explains the new blog title.
As usual, I want to continue "casting a large net" so I can be free to talk about anything I find interesting.
So, drop me a line, post comments. Let's continue a dialog that will help everyone (including me) learn new things and continue to have fun with our technological challenges.
A few years ago, IBM started talking about a smarter planet: Instrumented, interconnected, intelligent.
We are seeing more and more uses of sensors, from your smartphone and its many sensors (GPS, proximity, temperature, barometer, etc.) to the electric meter at your house. Add to that all the other sensors used in industrial plants, and even sensors on rails!
How can we convert this deluge of data into information?
This leads to issues related to two ways to handle data: in-motion and at-rest.
It happens that IBM has a mix of products that can handle these two "states" of the data:
For data in motion, we can use InfoSphere Streams for real-time analytics, based on more in-depth analysis of historical data (analytics models).
For data at rest, there are the problems of how fast we can store it and how fast we can retrieve the information, especially when many users are making requests. This would be an operational data store environment. Then, of course, there is the issue of "in-depth" analysis that requires fast access to large amounts of data.
Informix has the combined solution with its TimeSeries capabilities and the Informix Warehouse Accelerator.
Learn more about the use of Informix to solve this big data problem in the following webcast:
The new Informix, version 12.10 was announced last week. It is time to start talking about the new features in TimeSeries.
The Informix team has added a public version of a fast-loading mechanism. It allows you to load data into existing TimeSeries that are defined as part of a container.
This loader API was previously undocumented; it was only available as part of the tooling. A lot of work has gone into it since its initial internal implementation. You should not try to use the older internal version, since it disappears in 12.10 in favor of this new one.
You can find a description of its use in the "Informix Smart Meter Central" in the page Loading fastest with the loader API
You should also refer to the Informix documentation for more details.
Since the Loader API is an SQL API, it can be used by any clients including InfoSphere Streams.
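To give a feel for the flow, here is a rough sketch of a loading session in SQL. The table name (ts_data), column name (raw_reads), and the data values are all hypothetical, and the exact function signatures should be verified against the Informix 12.10 documentation:

```sql
-- Open a loader session for the hypothetical table ts_data, column raw_reads
EXECUTE FUNCTION TSL_Init('ts_data|raw_reads');

-- Buffer one reading: meter id, timestamp, and value, pipe-delimited
EXECUTE FUNCTION TSL_Put('ts_data|raw_reads',
    'meter1|2013-04-01 00:15:00.00000|1.25');

-- Write the buffered data to the time series, then close the session
EXECUTE FUNCTION TSL_Flush('ts_data|raw_reads');
EXECUTE FUNCTION TSL_SessionClose('ts_data|raw_reads');
```

A client such as Streams would simply issue these statements over its database connection, batching many TSL_Put calls between flushes.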
For more information on how to use Streams with the loader api, please see the Informix Smart Meter Central wiki: Streams and the TimeSeries Loader API
More to come. Don't forget, the IIUG conference is just around the corner. This is the perfect place to learn about all the new features in Informix 12.10: Simply powerful.
We are seeing more and more interest in using both InfoSphere Streams and Informix together.
This is in the context of "Big Data".
InfoSphere Streams is a platform that allows you to add operators as you see fit.
In our case, there are already a few operators that can be used to read from or write to Informix from InfoSphere Streams.
There is a new developerWorks article that describes how this can be done. With these basic examples, you should be able to integrate Informix in a Streams environment (or vice versa) in no time.
Here's the link to the article: Using InfoSphere Streams with Informix
I'm always looking for interesting information to stimulate my thinking.
My morning routine usually starts at around 5:30am and I use my tablet to look at news, blogs, tweets, and some web sites.
As part of the tweets I get, it includes some from a site called TED. I've talked about TED before. Take a look at my blog entry for January 2011: Happy new year!
In this blog entry, I recommended no less than four TED presentations.
For people who don't know TED, it is an organization that organizes conferences on all sorts of subjects. Presentations used to be limited to 17 minutes.
Now, you can also find presentations that are much shorter. TED's tagline is: "Ideas worth spreading".
So, in the morning, I often check what's new on TED to see if there is something interesting to watch during breakfast (of course, when I have breakfast alone...).
I recently came across one that I thought was interesting considering everything we've been hearing over the last 4-5 years about the global economy.
Of course, the fact that it talks about complexity and emergence is just a bonus.
Here is the link to this presentation: Who controls the world?
Happy new year everyone!
The Informix team is always hard at work improving the Informix products.
It turns out that, while working on V.next, a feature escaped and made it into version 11.70.xC5 and above (xC6 being the current release as of October 2012).
It concerns loading data into a TimeSeries using a relational view of the TimeSeries (also known as the VTI interface). To take advantage of this new feature, you simply
use the TS_VTI_ELEM_INSERT (128) flag when you create the relational view with the TSCreateVirtualTab() procedure.
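As a sketch of the idea (the table and column names here are made up, and the exact TSCreateVirtualTab() argument list varies by overload, so check the documentation for your release):

```sql
-- Create a relational (VTI) view over the hypothetical base table raw_readings,
-- passing the TS_VTI_ELEM_INSERT flag (128) to enable the faster element insert
EXECUTE PROCEDURE TSCreateVirtualTab('raw_readings_vti', 'raw_readings', 128);

-- Rows inserted through the view now take the faster code path
INSERT INTO raw_readings_vti VALUES('meter1', '2012-10-01 00:00:00.00000', 1.5);
```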
A simple test showed that this feature loads data 3.6 times faster than before. Of course, your "mileage" will vary depending on your environment. To learn more about how you can
use this new feature, consult the following link from the Informix Smart Meter Central wiki:
You can find additional details in the Informix information center in the following pages:
Another year is coming to an end.
All in all, not a bad year. Informix released 11.70.xC5 and 11.70.xC6 while continuing to work on the next major version of the product. You can find the latest Informix release notes at: Informix 11.70 Information center
We continue to see more acceptance of features like IWA and TimeSeries. The Informix group also delivered many presentations and demos at the IIUG and IOD conferences. We can add to that support for regional Informix users' groups, new redbooks, and so on.
Well... stay tuned. 2013 is lining up to be another good one for Informix. But what about ourselves? Are we improving over time, like good wine, or...
Here are some of my new year resolutions:
What about your new year resolutions?
For one, are you using the best Informix you could use? Resolve to upgrade to Informix 11.70.xC6 as soon as possible
There is a new redbook now available for people that want to get into the use of the TimeSeries feature.
It focuses on "how to" and nicely complements the Informix documentation.
The redbook can be found at: http://www.redbooks.ibm.com/abstracts/sg248021.html
Other resources to help you include:
We are now living in a world that is more and more instrumented, intelligent, and interconnected. That is actually the IBM definition of a smarter planet.
This opens the door to many possibilities to better use natural resources and improve many things.
Recently I ran into an interesting video on Ted called Tracking the trackers.
It is basically about how many different sites that you may have never visited can track you and your information, without you being able to say anything about it.
You can find this video at: Tracking the trackers
It is about a basically unregulated industry doing what is called "behavioral tracking". Apparently it is a $39B industry!
It is easy to jump from stealth tracking to security concerns: What is going on in your network?
Maybe it's time to review this URL: http://publib.boulder.ibm.com/infocenter/idshelp/v117/topic/com.ibm.sec.doc/SEC_wrapper.htm
This title could refer to a lot of things. What about countdown to summer vacation?
That's too far in the future. The countdown I am referring to is to the IIUG conference.
I'm really looking forward to seeing some old friends and finding out what's been happening lately. I'm also looking forward to fantastic sessions.
What about the keynote sessions? They rock!
And those are only the keynote presentations. The conference has eight parallel tracks with six sessions each per day. Lots of learning to be done.
And let's not forget the parties!!!
See you there Sunday night!
First, let me put an end to the rumor that the IIUG conference was moved to San Diego to accommodate me.
It is true, I live in that area. It is also true that I am presenting my fair share of material, but I can assure you that not even one passing thought about my location was part of the decision.
This being said, the conference is approaching quickly. One more week in March and then a few weeks in April and we're there.
As usual the conference organizers are trying to outdo themselves year after year. This year is no exception. What happened since last year?
For one, Informix 11.70.xC3 was just out then. Since then, we've seen xC4 come out. Can we hope for xC5 soon?
On my side, I am giving four sessions on various subjects:
Take a look at the list of sessions and hands-on labs at: http://www.iiug.org/conf/2012/iiug/sessions.php
See you there!
Just a quick note to say that I've updated the "Informix Smart Meter Central" wiki.
The changes are in the following page:
I updated the section on how to convert TimeSeries into relational format. That should give you a good example on how to use the Transpose function.
The examples use the stores demo database with TimeSeries data. This way, you can run the examples yourselves.
I added a new video on the "Informix Smart Meter Central" wiki. It is a recording of a live demo accessing the TimeSeries data provided in the stores demo database in Informix 11.70.xC3 or higher. Some of you may have already installed that demo in your environment, since the code is available on the wiki, but there is a new twist: I added a section that shows the combined use of spatial and TimeSeries. It is demonstrated using a Google map of the bay area (the San Francisco bay area, in case you wonder). You can find this video with other videos on the collaterals page:
I think that makes for a nice addition to the demo. It will eventually make its way to the demo code on the wiki.
Take a look at the videos, I hope you enjoy them.
I'm traveling this week to talk to multiple people about TimeSeries. It should be a good week!
I've been silent for quite a while. That does not mean I have not been busy!
A lot of effort has been put into TimeSeries over 11.70.xC3 and 11.70.xC4, and we are still going full steam ahead. We continue to improve its performance, scalability, usability, and functionality.
I wanted to put together a repository of information so people can find it all (or most of it) in one place. For this purpose, I put together a wiki on developerWorks that is dedicated to smart meter support. It is still a work in progress, but I believe it is a good start. You can find it using the tinyurl: tinyurl.com/InformixSmartMeterCentral.
Let me know what you think.
In the last few blog entries, I've been talking about TimeSeries. This time, I'd like to diverge a little for a change. Still, there is a tie to TimeSeries.
About a year ago, I went to an E&U conference. As you may know, Informix is making a push in this industry due to the advantages that TimeSeries can provide to it. In one of the sessions I attended, the presenter mentioned in passing the "Did you know?" video on YouTube. Just the context in which it was mentioned made me pay attention. I took a note and decided to look it up later. Last time I checked, it had had over 14 million views!
"Did you know?" starts with a global view of the world ("If you are 1 in a million in China, there are 1300 people just like you") and continues to talk about the evolution of the impact of technologies on our lives and its impact in the future.
Some other highlights:
Like it says in the video, we are living in exponential times.
Take a look at it, it's only 5 minutes of your time: http://www.youtube.com/watch?v=cL9Wu2kWwSY
We left off with an insert through the virtual table view. We created a container, a row type, a table, and a virtual table. What if we could simplify this? What if we could avoid creating a container?
One reason why you don't want to create containers could be that you have a lot of data to load and you would need a lot of containers. Would it be nice if Informix could help you with that? Informix can! In the Informix 11.70.xC3 release, we added a capability that does just that.
The new feature is referred to as auto create container. When you insert a new time series in a table and no container is specified, Informix will create one for you if needed. For example, let's take the following table:

CREATE TABLE jroy (
  loc_esi_id char(20) NOT NULL PRIMARY KEY,
  raw_reads  timeseries(meter_data) -- time series column; name illustrative
) LOCK MODE ROW;
We can insert a new TimeSeries without specifying a container:

INSERT INTO jroy VALUES(1, "origin(2010-11-10 00:00:00.00000),calendar(tst15min),threshold(0),regular,");
If there is no container available, one is created, as we can see in the tscontainertable table:

SELECT * FROM tscontainertable;

partitiondesc  autopool00000000  datadbs  16  16  4194538
This feature goes a few steps further. If the table is partitioned over multiple dbspaces, Informix creates one container per dbspace and puts them in a pool called autopool. Subsequent inserts can then go through the pool in a round-robin fashion to evenly distribute your time series over multiple containers and dbspaces.
If you prefer to manage your containers yourself, you can create your own containers and put them in a specific pool so you can take advantage of a container pool. You can even create your own policy to decide where new time series should be located.
There is more to know about these capabilities. You can find out more in the information center starting at:
Informix TimeSeries is a specialized storage and retrieval mechanism that optimizes the processing usually done on this type of information. For this reason, it includes specialized storage called "containers". A container is created in a dbspace; in fact, multiple containers can be created in the same dbspace. A container is created using the TSContainerCreate procedure:
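A sketch of that call, matching the description that follows (passing 0 for the last two arguments picks the default allocations):

```sql
-- Create the meter_cont container in the datadbs dbspace
-- for elements of the meter_data row type
EXECUTE PROCEDURE TSContainerCreate('meter_cont', 'datadbs', 'meter_data', 0, 0);
```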
This command creates a new container called meter_cont in the datadbs dbspace. It is created specifically for time series elements of type meter_data (row type). Since we are talking about a row type, it could include anything a row type accepts. The only restriction is that the first column has to be a datetime year to fraction(5). Here's a simple row type that could be used:
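A minimal sketch of such a row type; the first column is the required timestamp, and the payload column is an assumption for illustration:

```sql
CREATE ROW TYPE meter_data (
    tstamp DATETIME YEAR TO FRACTION(5),  -- required first column
    value  DECIMAL(14,3)                  -- hypothetical payload column
);
```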
The last two arguments represent the initial space allocation and the growth space allocation. This is similar to initial extent and next extent. A value of 0 resolves to the default of 16KB.
With this in place, we can create TimeSeries in a table. Let's start with the following definition:
We can insert a row in a table with an empty TimeSeries as follows:
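As a sketch, assuming the table is named ts_data (the name used with the virtual table later) and the TimeSeries column and calendar names are made up for illustration:

```sql
CREATE TABLE ts_data (
    meter_id INTEGER NOT NULL PRIMARY KEY,
    readings TimeSeries(meter_data)
);

-- An empty regular TimeSeries: origin, calendar, container, and no elements yet
INSERT INTO ts_data VALUES(
    1,
    "origin(2011-01-01 00:00:00.00000),calendar(tst15min),container(meter_cont),threshold(0),regular,"
);
```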
We now have a row in the table with an empty TimeSeries column. This is different from
Now, you may say: "Whoa! How do I insert data in that TimeSeries? Must be difficult".
The TimeSeries functionality includes a way to create a relational view on a table that contains a TimeSeries column. If the table were to include multiple TimeSeries columns, you could create multiple "views", one for each TimeSeries column. This capability is provided through an Informix feature called the Virtual Table Interface (VTI), which allows Informix users to make something look like a standard relational table. At this point, there is no need to describe this interface further. Informix TimeSeries provides a stored procedure that facilitates the creation of that virtual table. For example, we can create a relational view on our ts_data table as follows:
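A sketch of that procedure call, assuming the virtual table is to be named ts_data_v:

```sql
-- First argument: name of the virtual table to create;
-- second argument: the base table containing the TimeSeries column
EXECUTE PROCEDURE TSCreateVirtualTab('ts_data_v', 'ts_data');
```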
This creates a virtual table called
If you want to insert into a time series, you simply use a standard INSERT statement. If the row does not exist, it gets created; if it is already there, the TimeSeries column gets updated. Here's a simple insert example:
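Assuming the virtual table is named ts_data_v and exposes the id, the timestamp, and a value column, the insert could look like this:

```sql
-- Each row inserted through the virtual table becomes one element
-- of the TimeSeries for the given id
INSERT INTO ts_data_v VALUES(1, "2011-01-01 00:15:00.00000", 12.5);
```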
A simple standard SQL insert... How easy can it be?
We have a lot more to talk about. Next time, we'll start introducing some 11.70.xC3 capabilities. This is starting to get exciting! See you next time.
JacquesRoy 120000A2MS 2,449 Views
In 11.70.xC3, we added some new time series capabilities. Why would you care?
Time series are found everywhere. It is simply data that is collected over time. It could be changes in stock prices and transaction volumes. It could also be readings from your house's electric meter; readings could be done every 15 minutes, for example, to provide a much more accurate picture of how electricity is being used. Other time series examples include weather information, network traffic, thermal readings in a large data center, and so on.
One key characteristic of time series is that the processing always includes a time component. For example, you may want to get all the meter readings for one month for a specific customer. With this data, you can calculate daily consumption, running averages, and so on. To do this type of processing, you need quick access to the specific range of data you want to analyze, and you also need to get it in time order.
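As a sketch, assuming the readings are exposed through a relational view named readings_v with hypothetical column names, fetching one month of readings in time order is a plain range query:

```sql
-- One month of readings for one meter, in time order
SELECT read_time, kwh
FROM readings_v
WHERE meter_id = 4711
  AND read_time BETWEEN "2011-01-01 00:00:00.00000"
                    AND "2011-01-31 23:59:59.99999"
ORDER BY read_time;
```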
Informix provides a data type that is used specifically to optimize time series data. It also comes with an extensive set of functions used to manipulate these time series. Informix TimeSeries provides three major benefits:
Informix TimeSeries also provides the ability to create relational views on top of your time series data. This opens the door to the use of standard off the shelf products to do things like reporting.
With this very brief introduction, we are now ready to talk about the improvements made in 11.70.xC3. This will have to wait until next time
JacquesRoy 120000A2MS 2,099 Views
Informix often adds features in fixpacks, and xC2 and xC3 are no exception. I strongly suggest that you take a look at the list of new features in the release notice. You can find it at:
Release notice 11.70.xC3
In my next few blog entries I will not cover all the new features. I will limit myself to two main areas:
For anything else, see the release notice and the Informix documentation. The easiest way to do this is to use the information center that can be found at:
This release added a few compatibility functions that make it easier to move applications to Informix. They include:
Take a look at the details of these functions. I'm sure you will find a good use for them.
JacquesRoy 120000A2MS 1,730 Views
July 1st was the 10-year anniversary of the IBM acquisition of Informix. Since the acquisition, Informix has released versions 9.3, 9.4, 10.0, 11.10, 11.50, and 11.70. A few days ago, we released 11.70.xC3. Other recent additions include the Informix Warehouse Accelerator, which introduces game-changing technology for the data warehouse/data mart area. Add to that Informix Genero for fast application development and mobile applications.
So much has happened in these 10 years. Go take a look at Informix. Download one of the free editions and give it a try. For people who think they know Informix, go take a look at the large number of improvements we've added over the years. Visit www.ibm.com/software/data/informix and find out more about what is going on. The IBM Information on Demand conference is coming up. This is the best way to learn about the latest capabilities and network with Informix partners, customers, and IBMers. The conference is held in Las Vegas October 23-27.
Now that my major deliverables are done in xC3, I'll be back regularly to talk to you about these major improvements and how many people can take advantage of them.
Ten years with IBM and going strong. There is so much more to come. Stay tuned!
JacquesRoy 120000A2MS 1,983 Views
Hello everyone. I've been buried in development activities for the last many months, which explains why I've neglected my blog lately. I think I see the light at the end of the tunnel. Hopefully it's not a train...
One thing that will make me come out of my cave is the IIUG Informix conference. That's a great event that takes place this year in Overland Park, KS, between May 15 and May 18. As usual, it is the perfect opportunity to network with other Informix enthusiasts and the Informix lab people. The architects will be there in force; make sure to let them know what you like and what you want. Also, we should not forget the great sessions you can attend. A lot of effort has been put into this. I'm sure everyone will enjoy the conference.
While you wait for the conference to learn more about Informix, you could also read some of the technical articles published recently, such as:
There is more so stay tuned.
JacquesRoy 120000A2MS 1,716 Views
It's hard to believe that we are already at the end of March. Seems like it should still be January. According to my blog, it still is January! I better get on with it!
Informix just came out with version 11.70.xC2. No big deal, you may think. Wrong. It is a big deal! With xC2 we are making available a new edition: IBM Informix Ultimate Warehouse Edition.
I'll be talking about this at the France Informix user group next Monday. With this, queries that used to take hours can now take minutes. Some queries end up performing 100 times faster! For more information, look at:
And that's not all. We've been looking at ways to help our 4GL customers modernize their environment for years. We want customers to get more value out of their 4GL code and new application developments. The result: Informix Genero. Find out more at:
Stay tuned. The year is barely starting.
JacquesRoy 120000A2MS 2,733 Views
2010 has been a great year with the release of Informix 11.70 and 2011 is lining up to be a busy year with plenty of activities and execution on the plans of v.next. I also hope that in 2011 we'll see even more participation from the Informix community to continue to make Informix and the solutions around it better and more exciting.
Part of making Informix and its ecosystem better is to share ideas and be exposed to ideas that may or may not be related to Informix. If you remember, in my blog entry of October 9, I talked about where good ideas come from. It is time that I divulge the source of those comments: a site called TED. That specific presentation is Where good ideas come from.
Here are a few other presentations I enjoyed from www.ted.com:
There are many more interesting talks in there. I hope you'll enjoy these short presentations. Who knows, by exploring these presentations and others, you may come back with a new outlook on how we can use Informix to make our world a smarter planet.
Someone asked me the following question:
"How do I keep passwords in the database so nobody can get them?"
It means that we cannot keep the passwords in plain text in the database. Informix has a few functions that can be used for encryption: ENCRYPT_AES and ENCRYPT_TDES. It would be easy to create a table and encrypt the column that contains the passwords.
The next statement that came up was: "..but, if someone has the encryption password, he can get all the passwords. We need to protect the passwords from internal access".
This means that we need to use a different password to protect each password in the table. The solution I proposed was to use the password to encrypt itself. Let's look at an example:
CREATE TABLE passwd (
    col1 INTEGER,
    col2 LVARCHAR
);
INSERT INTO passwd VALUES(1, ENCRYPT_AES("Jacques", "Jacques"));
INSERT INTO passwd VALUES(2, ENCRYPT_AES("Lance", "Lance0"));
INSERT INTO passwd VALUES(3, ENCRYPT_AES("Daniel", "Daniel"));
INSERT INTO passwd VALUES(4, ENCRYPT_AES("Umut", "Umut01"));
The values inserted look as follows:

SELECT * FROM passwd
I can now test if someone has the right password for user 1 by using the password value to decrypt itself:

SELECT col1, DECRYPT_CHAR(col2, "Jacques") FROM passwd WHERE col1 = 1;
If I use the improper password, I receive an error:

SELECT col1, DECRYPT_CHAR(col2, "Jacques") FROM passwd WHERE col1 = 3;
26008: The internal decryption function failed
One more thing: note that the encryption password must be at least six characters long. This is why, in the example, I padded some encryption passwords. An easy way to work around it is to always add padding to make sure we meet that minimum size. Keep in mind that the maximum size of an encryption key is 128 bytes.
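One way to handle that padding consistently, assuming your Informix version provides the RPAD string function, is to pad every key to the six-character minimum before encrypting (user 5 and the password "Bo" are made up for illustration):

```sql
-- RPAD pads "Bo" to "Bo0000" so the key meets the six-character minimum
INSERT INTO passwd VALUES(5, ENCRYPT_AES("Bo", RPAD("Bo", 6, "0")));

-- The same padded value must be used to decrypt
SELECT col1, DECRYPT_CHAR(col2, RPAD("Bo", 6, "0")) FROM passwd WHERE col1 = 5;
```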
With this approach, we can keep passwords in the database and keep them secret.
JacquesRoy 120000A2MS 3,513 Views
Wednesday started with an Informix "eat and meet" breakfast followed by nine different Informix sessions spread throughout the day. My favorite session was "How Hildebrand and IBM bring smart metering to homes across Britain". It was very interesting to see a real-time system where people can see their power consumption and compare it to a pool of similar homes to see how they are doing. The system does not only measure the total consumption of a home but can break it down to specific outlets. For example, some people were able to find out that their energy consumption was greatly impacted by their use of hair-straightening devices. Another person found out that they spent around 250 pounds per year to run their old refrigerator; buying a new one for 200 pounds made it pay for itself pretty quickly.
Of course, the other presentations were also interesting. They covered areas such as building data warehouses, grid-based replication, Informix in the cloud, and more.
An additional 11 sessions were held on Thursday to wrap up the conference.
The one thing that is hard to measure at a conference like this is the value of the interactions with other people: discussions on different interests and new challenges, and also on how Informix has been used. This ties into what I mentioned in this blog on Oct 9: good ideas come from people interacting. The conference provided a good environment for that. This was a great conference and you can expect interesting things coming out of the Informix lab in the future. I'm sure we'll have a lot to say next time we meet: the International Informix Users Group (IIUG) conference in Overland Park, Kansas, which will be held between May 15 and 18, 2011.
JacquesRoy 120000A2MS 1,605 Views
Another year, another conference. It has been so busy that I have not had the time to write a short blog entry for each day. Here is my quick update.
It all started Saturday morning with the business partner council and the customer advisory council. I attended the customer advisory council and found it interesting and full of good discussions.
The conference was kicked off with an opening reception on Sunday night and we were off to the races. There were eight Informix sessions on Monday, including presentations on how IBM helps Cisco, open source, a hands-on lab on high availability, another on the new features of Informix 11.70, best practices for virtual environments, and performance enhancements. Of course, the most popular session was from Jerry Keesee, titled "Informix at IBM: The next decade".
The day ended with an Informix reception at the Mandalay Bay beach casino for an Informix 11.70 launch celebration and to start looking forward to the next decade of Informix at IBM.
Tuesday started early with an Informix "eat and meet" breakfast at 7:00am, followed by nine Informix sessions throughout the day. The sessions covered areas such as upgrade, new features, Informix warehouse, application development, 4GL, embeddability, flexible grid, and more. It was also interesting to hear about how Informix is used to run a steel plant.
The day ended with a beach party reception. Now it is on to Wednesday with another full agenda.
Yes, a new version of Informix is now available: Informix 11.70.
There are a lot of great features in this release. I could talk about the flexible grid that allows you to manage many machines like one and supports rolling upgrades. I could talk about the new analytics features, where we've seen warehouse-type queries speed up by around 50%. I could talk about storage provisioning, improved installation, and embeddability features. Yes, I could talk about all this, but at this time I want to talk about some features that should interest application developers.
I have to admit I am a little biased since my group is called application development services. However, the features I want to talk about were either requested by customers or have had a very positive reception in early mention under non-disclosure or during the beta period.
The first one will facilitate porting schemas from other databases to Informix. Let me first show an example:
CREATE TABLE tab (
The first improvement is the ability to change the order of constraints and default values. Before Informix 11.70, the col1 definition would have returned an error since the default clause had to be located before the NOT NULL constraint.
The second improvement is the ability to explicitly say that a column can accept NULL values. Before, it was implied if the NOT NULL constraint was not there.
The last improvement shown in the example above shows that we can add "ON DELETE CASCADE" after the constraint name.
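Putting the three improvements together, the example table might have looked something like this (table, column, and constraint names are made up for illustration):

```sql
CREATE TABLE tab (
    col1 INTEGER NOT NULL DEFAULT 0,      -- default clause after the NOT NULL constraint
    col2 CHAR(10) NULL,                   -- explicitly nullable column
    col3 INTEGER REFERENCES parent(id)
         CONSTRAINT fk_tab_parent
         ON DELETE CASCADE                -- action allowed after the constraint name
);
```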
Another improvement in the DDL area is the ability to conditionally execute CREATE and DROP statements. Here are two examples:
CREATE TABLE IF NOT EXISTS tab ( . . .);
If, for example, you want to make sure a table is re-created, you could always say:
DROP TABLE IF EXISTS tab;
If you want to make sure that you keep the table if it already exists, then don't do the "DROP IF EXISTS" and simply use "CREATE TABLE IF NOT EXISTS".
Finally, here's another DDL feature that was in great demand. It is not really an application development feature but it has been requested a lot: The ability to define the EXTENT size in a CREATE INDEX statement:
CREATE INDEX myidx ON tab(col1) FIRST EXTENT 8 NEXT EXTENT 8;
Don't forget to read the release notice since there are many other improvements on the INDEX capabilities.
On the DML side, we are now able to use expressions in the COUNT aggregate function. This can be useful if you want multiple aggregates in one statement:
SELECT COUNT(*) total, COUNT(CASE WHEN sex = 'M' THEN 1 ELSE NULL END) males
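Extended with a hypothetical FROM clause and a second group, a complete statement could gather all three counts at once:

```sql
-- customer is a hypothetical table with a sex column
SELECT COUNT(*) AS total,
       COUNT(CASE WHEN sex = 'M' THEN 1 ELSE NULL END) AS males,
       COUNT(CASE WHEN sex = 'F' THEN 1 ELSE NULL END) AS females
FROM customer;
```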
Without this capability, you would have to solve this problem with three separate statements. For example:
SELECT * FROM