Hello, and welcome to my first blog post. In this blog I will concentrate on topics relevant to Informix. I'll cover new features, new releases, success stories, recent events, use cases, and general information relevant to the Informix community.
The first topic I'd like to cover is blogging itself. I know that many of you who are reading this are as interested in the success of Informix as I am. I know you have strong opinions about what Informix should be doing in terms of features, marketing, or sales. The fact is that you all make as much of an impact on the success of Informix as IBM does. Informix would not be as successful as it is without your support. What I am asking is that you make your opinions public by creating a blog yourself. It's a great way to help spread the word about Informix. If you are not an IBM employee and you have a blog, that's even better. The point is that the more people we have spreading the word about Informix, the better.
For recent events there are a couple I would like to mention. I've spent quite a bit of time in China over the last couple of years talking with new and existing customers, most recently last week. What I have seen is that the excitement around Informix is building each year. A good example is the China Users Group for Informix. Two years ago there was no users group in China, and since most people there do not feel comfortable reading and writing in English, our customers had a hard time getting the same information that is easily available on the IIUG website. The CNIUG was established about a year and a half ago and is now flourishing, with hundreds of people signed up. Their forum is very active and they have done a great job sponsoring numerous events. There is a three-day Chinese IUG conference held each year, plus many road shows and seminars. In addition to the users group, we have also opened an Informix lab in Beijing.
The other event currently going on is the IBM sales and technical enablement event in Brussels. Fred Ho and Terri Gerber are there right now getting the IBMers up to speed on our newest features and functions. Read Fred's blog; I believe he will talk more about it. (And after Fred's blog, have a look at Keshava Murthy's and Jonathan Leffler's blogs as well; both are newly started.)
Talking about new features, I think most of you already know that 11.50.xC6 is out the door and xC7 will come out shortly. We did something a little different with 11.50 by putting more features into our fixpacks. Here's a quick list of some of the things that came out in the 11.50 fixpacks:
External tables (from XPS)
Light scan support for all built-in datatypes (from XPS)
CONNECT BY statement
Support for online attach/detach when there is no data movement
Many OpenAdmin Tool changes (compression, auto update statistics, ER plug-in, etc.)
Many ER enhancements (new alters, new cdr check options, etc.)
Delayed apply of updates on a secondary
Backup from a secondary
XA transaction support on a secondary
Security checks for improper symbolic links at startup
Multibyte character enhancements
Direct I/O on AIX
Virtual appliance enhancements
Better cloud support
This is just off the top of my head. As you can see we have been quite busy. All this is in addition to the features that were part of the initial 11.50 release, things like shared disk support. We are also working hard on the next release of IDS which will also be filled with a lot of new features and functions. As we get closer to the release date I'll be sure to keep everyone up to date on what we will have.
Well, that's it for now!
In case you had not heard, there are two new analyst reports out on Informix: one from ITG and the other from Forrester. Both reports are very favorable to Informix and I encourage you to read them. The ITG report compares the cost of Informix over time to that of SQL Server and concludes that Informix is by far the less expensive of the two, about one-third less in fact. The ITG report can be found here: ITG report
The Forrester report focuses on the cost of Informix for a hypothetical global retailer with 4,000 instances of Informix. They conducted interviews with many of our global retailers, including some of the biggest ones, and concluded that Informix offers the following key strengths:
You can find the Forrester report here: Forrester report
Note that these reports are available to the public, but you will need to register before you can download them.
It's been a while since I posted my last entry. I've been meeting with customers quite a bit over the last few months, here in the US, in Europe, and in China. I even managed to get caught in the Icelandic ash cloud along the way. I feel these last six months have been some of the most exciting times for Informix. There is huge interest in our next release, which is code-named Panther. On top of that we have a new executive VP of business development, Rob Thomas; a new enablement team led by Dilip Kikla; and our Informix editions have been completely revamped. In addition to all this, a brand new market has opened up for Informix: the smart meter market in energy and utilities.
Earlier this year we had several prototypes in the works with various customers in the E&U field. Since these engagements were spread around the world (the UK, Iceland, and India), no one noticed at first that a trend was building. We were winning each of the opportunities, and quite easily at that. The real break came when an article was published on our win in the UK with Hildebrand. With that it became quite apparent to many people that we were a great fit for smart meters.
Based on the information in the article, several different E&U teams began working with us to understand why Informix was such a good fit for smart metering. The bottom line was that the TimeSeries DataBlade, which no other RDBMS has, was the key. Smart metering is a classic example of the sort of thing the TimeSeries blade was built for: collect massive amounts of timestamped data very quickly while simultaneously running reports and doing analysis on that data. This is what smart meters do: they collect data about energy consumption at a residence or business and then periodically send the information to be stored in a database for billing and data mining purposes. Applications that access the data do so in timestamp order. Operations can be as simple as pulling data for a particular meter, or could be finding the average daily usage for a particular zip code, or even correlating energy usage with weather data. This was exactly what Oncor, a smart meter provider in Texas, needed. They came to us looking for a way to handle 3.5 million meters and store data for 25 months. Their current solution, based on Oracle, was handling about 1 million meters and was taking too long to ingest the data and run reports. A proof of concept (POC) was set up, and the result was that with Informix plus the TimeSeries blade, load times went down from multiple hours to about 18 minutes for a day's worth of data for all meters. Reports that were taking hours to run would complete in 6 minutes with Informix, and in seconds if the data was already cached. On top of that, with the intrinsic disk space savings you get with the TimeSeries blade, disk usage went from 1.3 TB down to about 350 GB for 90 days' worth of data for 1 million meters. With these kinds of results Oncor became a strong champion of Informix and has been reaching out to its customers to encourage them to have a look at Informix.
In addition to customers in the US, other smart meter providers around the world have heard of these results and have begun contacting us. This has led to additional meetings and POCs with customers in the US, the UK, Holland, Denmark, and Germany, and I'm sure many more will follow.
Although this entry concentrates on smart meters, there are many other applications within E&U that work with time series data. I'm sure that as we get more established in the smart meter business there will be additional opportunities in other areas of E&U.
kbrown3 060001S1HU Tags:  informix availability high scale-out replication scalability 2 Comments 4,278 Views
With the IIUG conference around the corner, I have been looking through a fair number of presentations on Informix. On the one hand, I am excited about all the great technology we have and am sure the attendees will be very happy with the content. On the other hand, I also know that many of these presentations will be used with new and prospective customers. What strikes me is that while the presentations do a great job of describing what we have done, they do less well at describing why we have done it. Looking at isolated features, it's hard to get a sense of why anyone would need them or how they integrate with the other features we have. For people who know our product this is not such a problem; if you know our product, it is much easier to see how and why we implemented many of our new features.
An example of what I am talking about is replication. We've done a great job presenting each function individually (HDR, ER, RSS, SDS, CDC, etc.), but what is often missing is how these features fit together. It's sort of like describing a car in terms of its parts:
All of these are useful in their own right in many situations, however if you don't present the bigger picture then your listener may miss the point. This is basically what I think we often do with our functionality. We talk about the features in isolation without referring to the bigger picture.
To resolve this, Madison, Fred, Jasna, I, and many others have been rethinking how we present our high availability and scalability features so that people realize we have a seamless suite of functionality rather than a bunch of disparate individual features. Since the lower-level presentations seem pretty complete, we concentrated on the high-level messaging.
The 10,000-foot view is that we have the most comprehensive high availability and scalability solutions around. With Informix high availability we provide "always on" capability in nearly any environment, and we are able to do it in the most cost-effective way possible. From a scalability point of view, we get great performance whether we scale out locally or globally, or scale up. It makes more sense to talk about scenarios where high availability or scalability is needed and how we can solve them, rather than starting by describing the features and then talking about use cases. If you are unfamiliar with Informix, these high-level statements and use cases make much more of an impact than saying "We have HDR/RSS/SDS/ER/CDC/CLR... and you really need to hear how each of these works." After this overview is given there will be plenty of time for drilling down into the functionality that makes this possible.
The intent here is not to pick on high availability; we have many features which need to be presented in a more user-friendly way. I feel Informix is the best database around, and we should look for every opportunity to promote it to as many people as possible, especially those not already familiar with Informix. Making our message clearer can only help.
Today, July 1st 2011, is the 10-year anniversary of Informix being acquired by IBM. I thought I would write a little bit about what we have accomplished over the last 10 years. I think the list is quite impressive.
First, we have had quite a few releases:
9.3 (Rocky) in 2001
9.4 (Patriot) in 2003
10.0 (Tuxedo) in 2005
11.1 (Cheetah) in 2007
11.5 (Cheetah 2) in 2009
11.7 (Panther) in 2011
Each of these represents a major effort and significant new functionality. In addition to these major releases, we shipped many features in fixpacks as well. Here is a list of some of the main features from each of these releases:
Moving forward, the future is bright. We have a lot of innovation in the pipeline scheduled for the next 10 years and beyond. With our recent successes in new markets such as energy and utilities and embedded/mobile devices, we are not only concentrating on existing customers but also expanding our business to customers that have not traditionally used Informix. IBM is making this possible through collaboration with other industry teams as well as IBM research centers.
I look forward to another 10 years of innovation and growth!
It's only been a couple of weeks since my last entry, but a lot has happened. First, our push into smart meters is gaining traction. Fred, Keith Hall from GBS, and I flew to Minnesota to visit a company that does both smart meter infrastructure and analytics. They are currently using Oracle but are having a hard time getting their analytics done in a timely manner. We spent the day with them talking about how we would go about solving their problems, and in the end they could see that our time series support offered a solution not available in Oracle, one that would significantly reduce the time it takes to do their analytics. A POC will follow shortly.
A few days after the Minnesota trip I flew to China to meet with a number of IBM teams as well as many of our customers. As I've mentioned before, Informix has a huge install base in China and the opportunities there are tremendous for us. On this trip I started with a visit to our enablement and support teams in China. Both have been doing a great job driving new business and ensuring our customers remain happy. Here's a picture of Young-yi, Guo Rong, Hong Tao, and Ye Xie from our enablement team during dinner the first night:
And here is a picture of one of our lead consultants in China, Lu Chuan on the right, (along with Young-yi and one of the IBM sales reps in China):
The other IBM team I met in China was the E&U research team. They had heard of our success in other parts of the world and wanted to learn more. Yun Wang, an IBM Fellow, came to the meeting, and after much Q&A he concluded that Informix offers unique technology well suited to smart meters; he offered to help us push Informix in China. This is great, given that China is just starting to roll out smart meters in several of its cities.
The rest of the week in China was spent visiting customers in Shanghai, Nanjing, and Shenzhen (if this is Tuesday, it must be Shanghai :-) ). The customers I met with ran the gamut from telco to banking to retail. Most of these are major international companies whose business depends on Informix. One of the more interesting meetings was with a large retail bank with branches throughout China. Walking into their development center was like walking into NASA's control center: there were probably 50 floor-to-ceiling screens showing everything from network connectivity, to onstat output, to weather information. Probably over 100 engineers were seated on this one floor, and with another 10 floors you can imagine the number of people working on development there.
In all cases the customers were very interested to hear about the new features coming in our next release (due in October of this year). Depending on their business, all of them found something of interest in the release, whether in the area of replication, warehousing, or application development.
Next up in China is a new Informix conference. The last I heard, this will probably happen in December and will comprise a full day of technical and business content. I'll post information here as I learn more.
When I talk with customers I know they're impressed with the Informix technology, but they have been less impressed with the marketing of Informix. To address this, the marketing team has started the "Discover Informix" campaign. This is just the beginning of an initiative to let you know the marketing team has heard your suggestions and is taking action. The kickoff for this initiative can be found at http://www-01.ibm.com/software/data/informix/discover-informix/hassle-free.html .
If you click on the link you'll see that it outlines the strengths of Informix (performance, ease of use, reliability, high embeddability, etc.) and why these qualities are so important to our customers (with some customer testimonials). You'll also find links to an ROI tool that can calculate how much money you can save by using Informix, along with demos, e-books, and other important information. I also think the site's look and organization are a step beyond what we usually see on other IBM web sites.
As a follow-on there will be a kickoff event in New York on Tuesday, March 16th. Presenting at this event will be:
This will be the first of many such events in many countries around the world. IBM as well as our business partners will begin hosting these events to drive more momentum for Informix.
Keep an eye on the web site mentioned above as well as my blog for more information about this campaign as it progresses.
It's finally here - Informix 11.7 (Panther) has been launched! See the press release here:
(old info: 11.7 Panther Launch)
If you have not been involved with any of the Panther EVPs and have not attended any of the Panther presentations, here is a quick rundown of what it provides:
Ease of use/Embeddability
What is Flexible Grid?
Flexible Grid allows you to build a grid of servers using different hardware, operating systems, and versions of Informix. Just name the servers you want in your grid, give your grid a name, and you are up and running. Administration is a snap: you can add or remove servers as online operations. You can also do administration through any node; just designate one or more nodes as your administration node(s), and from there you can add tables, alter tables, add dbspaces, and run stored commands on all nodes in the grid. You also get the option of replicating just schema changes, or schema and data changes. In addition to administration, Flexible Grid also allows you to spread your workload. Since your grid will often be spread across a wide area, perhaps across the country or the world, you are given the option to define an ordered list of servers to attach to. This way you can favor servers geographically close to you, and if that node fails, have your workload go to a node that is farther away.
For warehousing we have taken a number of the best features from XPS and moved them into Informix. For anyone who has known Informix for a while, this is very similar to what we planned for our "Arrowhead" project, only instead of moving Informix features to XPS we are going from XPS to Informix. For Panther we have moved the optimizations for star and snowflake queries into Informix. We've also moved over the multi-index scan, which allows our optimizer to take advantage of more than one index during query processing. And we've completed our support for light scans and appends so that all datatypes are now supported.
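Conceptually, a multi-index scan qualifies rows against each usable index separately and intersects the resulting row-id sets before touching the table. A minimal sketch of that idea (not Informix internals; the indexes and row ids here are hypothetical):

```python
# Toy illustration of a multi-index scan: each predicate is resolved
# against its own index, producing a set of row ids, and only the
# intersection of those sets needs to be fetched from the table.

# Hypothetical single-column indexes: column value -> set of row ids.
region_idx = {"east": {1, 2, 5, 9}, "west": {3, 4, 7}}
year_idx = {2009: {2, 3, 9}, 2010: {1, 4, 5}}

def multi_index_scan(*rowid_sets):
    """Intersect row-id sets from several indexes; the surviving
    rows are the only ones the executor must read."""
    return sorted(set.intersection(*rowid_sets))

# Equivalent of: WHERE region = 'east' AND year = 2009
matches = multi_index_scan(region_idx["east"], year_idx[2009])
print(matches)  # [2, 9]
```

The win is that the table itself is touched only for rows satisfying every indexed predicate, instead of scanning on one index and filtering the rest row by row.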
We've also added other warehouse features not in XPS. These include the ability to define an interval to fragment your table by (such as monthly); Informix will then automatically add new fragments as necessary.
Ease of Use/Embeddability
We have leveraged the dbscheduler to add more tasks that can be scheduled to run automatically. This means that Informix relies even less on DBAs to manage day-to-day operation. These new tasks do things like monitor table usage to see if any tables can benefit from compression, and look for users that have been idle too long. We have also automated the task of adding space to tables. You can now set a policy that tells the system when to automatically add space to a table or extend a chunk, and where to get the space from. We also now ship TimeSeries support with Informix. In addition, there is no longer an extra step to register the TimeSeries blade into your database; this is done automatically on first use. The same is true of BTS and Spatial: these datatypes can be used as needed without prior registration. Another area we have worked on is self-configuration. It is now possible to configure Informix so that it automatically takes advantage of whatever CPUs and memory are available on the machine it is installed on.
We wanted to make it easier for people to develop applications, and we decided the easiest way to do that would be to feel their pain. We created a team to focus on open source products and to see how hard it was to get them to work with Informix. From this effort we got two things: a bunch of open source products that now work very well with Informix, such as Hibernate, Drupal, xWiki, and MediaWiki, and a number of simple but powerful changes to Informix that made it much easier to port these applications. These changes were mostly in the area of new SQL syntax, for instance the ability to create an object only if it does not already exist, drop an object only if it does exist, allow NULL columns, and use expressions in aggregates. The other thing we did was add an SPL debugger. This debugger works with IBM Optim Studio and soon will work with Visual Studio. Finally, we are now also bundling Mashup Hub with Informix.
In the area of security we have greatly improved our auditing facilities with selective row-level (SRL) auditing. With SRL you can greatly reduce the impact auditing has on performance and be much more selective about what gets audited. The result is better performance and fewer audit records to look through. We have also added trusted context to Informix. This feature is useful in a multi-tiered environment, for instance where you have an app server talking to Informix. With 11.50 you would have to create a connection from the app server to Informix and multiplex all your users through that connection; the result is that all users look the same to Informix. With trusted context you can let the app server do the authentication and then send a message to Informix saying which user a particular query is being sent on behalf of. This way you can distinguish user 1 from user 2 and so on without shutting down and restarting the connection to Informix.
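To make the trusted-context idea concrete, here is a hedged sketch in Python. The class and method names are hypothetical (this is not a real Informix or driver API); it only models the shape of the pattern: one authenticated middle-tier connection, with each statement tagged with the end user it runs on behalf of.

```python
# Hypothetical model of a trusted-context connection: the app server
# authenticates end users itself and switches the effective user on
# one shared connection, instead of reconnecting per user.

class TrustedConnection:
    def __init__(self, middle_tier, trusted=True):
        self.middle_tier = middle_tier  # e.g. an app server identity
        self.trusted = trusted          # only trusted tiers may switch users
        self.audit_log = []             # (effective_user, sql) pairs

    def execute(self, sql, on_behalf_of):
        if not self.trusted:
            raise PermissionError("only a trusted middle tier may switch users")
        # The database now sees the real end user, not the app server.
        self.audit_log.append((on_behalf_of, sql))

conn = TrustedConnection("app-server-1")
conn.execute("SELECT * FROM accounts", on_behalf_of="alice")
conn.execute("SELECT * FROM accounts", on_behalf_of="bob")
# One connection stayed open, yet "alice" and "bob" are distinguishable.
```

The point of the design is in the audit log: each statement is attributable to a real user, which is exactly what is lost when everyone is multiplexed through one generic connection.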
We always strive to improve performance, and 11.7 is no exception. We have increased OLTP performance overall by 10%, and warehouse queries have improved by as much as 10 times in the case of queries that benefit from star or snowflake join optimizations. We have also introduced a new indexing method called Forest of Trees (FOT). When you use a btree you can run into contention at the root node: even when all your applications are only reading the btree, each has to get a read lock on the root node, and this can lead to serialization in a heavily used system. FOT reduces this problem by creating a hash table and then creating a btree under each hash bucket. Contention on the root node is reduced since there are now many root nodes, and the depth of each btree is also reduced, which can lead to better performance as well.
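The Forest of Trees structure can be sketched in a few lines. This is a toy model under stated assumptions (a sorted list stands in for each btree subtree; Informix's actual implementation is different): keys are hashed into a fixed number of buckets, and each bucket maintains its own ordered structure with its own "root".

```python
import bisect

# Toy Forest of Trees: hash each key to one of N buckets, each with
# its own ordered structure. Readers of different buckets never
# contend on a single shared root, and each tree is shallower.

N_BUCKETS = 4
forest = [[] for _ in range(N_BUCKETS)]  # sorted lists stand in for btrees

def fot_insert(key):
    bucket = forest[hash(key) % N_BUCKETS]  # pick a root via the hash
    bisect.insort(bucket, key)              # keep order within the bucket

def fot_contains(key):
    bucket = forest[hash(key) % N_BUCKETS]
    i = bisect.bisect_left(bucket, key)
    return i < len(bucket) and bucket[i] == key

for k in range(100):
    fot_insert(k)
```

With 100 keys spread over 4 roots, each ordered structure holds about 25 keys instead of 100, which is the depth-reduction effect described above.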
These are just a few of the new features that have been introduced in 11.7. There will be plenty of opportunities to learn more in the next few weeks. We have already started our analyst briefings, a number of new articles will be published, and lots of information and sessions will focus on 11.7 at the IOD conference in two weeks. Finally, if you are interested in hearing more and don't know where to turn, you can always send me a note!
I'm adding this entry at 35,000 feet. I'm on the way back from a customer meeting, and the first thing I saw as I entered the plane was a large WiFi sticker on the side of the plane. It turns out that Delta now has 1,800+ planes flying with WiFi enabled. I paid $12.95 and now I'm able to do anything on my laptop that I could have done back home (although I must admit I'm quite a bit more cramped). This may not be that new, but it's the first time I've had a chance to do this and I think it is pretty cool.
The other thing I think is pretty cool, and that I wanted to share, is Informix support for time series data. Informix is the only relational database that has access methods and functions designed specifically for time series data. In case you were wondering, a time series is a set of related data that changes over time, for instance the stock trades for IBM, or smart meter energy usage, over time. In a way you can think of a time series as an array, only instead of asking for the 1st, 2nd, and 3rd records in the array you ask for the Jan 1st, Jan 2nd, and Jan 3rd items: an array accessed by time. Typical access to time series data is by time range, meaning queries tend to look at all the data in a time range before moving to another time series.
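The "array accessed by time" idea can be sketched as follows. This is an illustrative model only (the series and values are made up, and Informix's clip-style range functions are far richer): a series of daily values keyed by date, read back by time range.

```python
from datetime import date, timedelta

# A toy time series: ten daily readings starting Jan 1st, indexed by
# date rather than by array position.
series = {date(2010, 1, 1) + timedelta(days=i): 100.0 + i for i in range(10)}

def time_range(series, start, end):
    """Return the (timestamp, value) entries between start and end,
    inclusive, in time order - the typical time-series access pattern."""
    return [(d, v) for d, v in sorted(series.items()) if start <= d <= end]

jan_1_to_3 = time_range(series, date(2010, 1, 1), date(2010, 1, 3))
```

Asking for Jan 1st through Jan 3rd returns exactly three consecutive entries, just as indexing positions 0 through 2 would in an ordinary array.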
Informix takes advantage of this and uses storage that is optimized for this kind of access. We cluster data for a particular time series to minimize the number of I/Os needed to retrieve data for that series. This means that if you want all the IBM stock trades for Jan 1st, 2nd, and 3rd, we ensure that those pieces of data are clustered together on the same physical disk page.
The other thing we do is a sort of compression to ensure the data is as small as possible. One of the things we found is that quite often time series data is sparsely populated. Because of this we only create pages for a time series where there is actually data. For instance, if you had stock data for IBM for 2007 and 2009 but not 2008, we will not reserve any space for 2008. Later, if that data becomes available, we will add it into the series.
Another way we save space is that we do not store NULL column values. If you define a time series to hold 6 columns of data and you enter a record in which 2 columns contain NULL, those NULLs will not take any space. This is because we add a header to every record that indicates which columns (if any) are NULL. Relational databases typically use a value to indicate NULL, which means NULLs take the same amount of space as non-NULL values. Since NULL columns tend to be common in time series data, this can lead to a lot of space savings.
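To see the effect, here is a minimal sketch of the header technique, assuming a record of up to 8 float columns (this is not the Informix on-disk format, just the idea): a one-byte bitmap marks which columns are NULL, so each NULL costs a bit instead of a full column width.

```python
import struct

# Illustrative packing: a per-record header bitmap marks NULL columns,
# so NULLs consume one bit of header instead of an 8-byte placeholder.

def pack_record(values):
    """Pack up to 8 float columns; bit i of the header is set when
    column i is NULL, and only non-NULL values are written."""
    null_bits = 0
    payload = b""
    for i, v in enumerate(values):
        if v is None:
            null_bits |= 1 << i               # NULL: header bit only
        else:
            payload += struct.pack("<d", v)   # 8 bytes per real value
    return struct.pack("<B", null_bits) + payload

full = pack_record([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])      # 1 + 6*8 = 49 bytes
sparse = pack_record([1.0, None, None, 4.0, 5.0, 6.0])  # 1 + 4*8 = 33 bytes
```

A scheme that stores a sentinel value per NULL column would make both records 49 bytes; with the bitmap, the two NULLs shrink the record by 16 bytes.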
We also save space by not storing timestamps when we don't have to. We can do this because it is very simple to calculate the timestamp of intervalized data. For example, if you are storing smart meter data that comes in every 15 minutes, all you need to know is the timestamp of the first entry in the time series. After that it is very easy to do the datetime math to calculate the timestamp of every other entry in the series.
To finish up with space savings, space is also saved by not requiring an "id" attached to each record. For example, if you had a time series of stock data and used a standard relational schema, you would have to add some sort of stock id to each record so that you could determine which stock the data belonged to. With our time series approach this is not required: we store the "id" once, and every record is associated with that entry without needing an "id" attached.
We have just completed a POC for a smart meter company, and one of the things that drove them to look at Informix and time series was this space savings. In their case they have 3.5 million smart meters, each generating a record every 15 minutes, and they want to save 25 months' worth of data. Doing the math you get:
3,500,000 * 96 intervals/day * 760 days = 255,360,000,000 (about 255 billion records!)
Right off the bat we are going to save the size of an id, which is 8 bytes, and the size of a timestamp, which is 11 bytes; 19 bytes times 255 billion records is quite a savings. This is pretty much what we saw in the POC: we used about one-third of the space that Oracle did.
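The arithmetic above can be checked in a few lines (25 months is taken as roughly 760 days, as in the figure above):

```python
# Checking the math: 3.5M meters, a reading every 15 minutes
# (96 per day), kept for ~760 days, and 19 bytes/record saved by
# dropping the per-row id (8 bytes) and timestamp (11 bytes).
meters = 3_500_000
intervals_per_day = 96
days = 760

records = meters * intervals_per_day * days
assert records == 255_360_000_000            # about 255 billion records

bytes_saved = records * (8 + 11)             # 19 bytes per record
print(bytes_saved / 10**12)                  # roughly 4.85 TB saved
```

Nearly 5 TB saved before any other compression technique is applied, which is consistent with the roughly one-third footprint seen in the POC.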
The other savings is in the number of rows that have to be managed. With our time series there is one large row per meter, so 3.5 million rows versus 255 billion rows. With 255 billion rows, index maintenance, statistics maintenance, and storage management all become quite a problem.
All in all I think this technology is pretty cool and something we should use more often. In another entry I will go into some detail about the query side of time series, which is an equally good story. I'll also talk about how time series fits with BI queries.
This October 24-28, IBM will be hosting its Information on Demand conference in Las Vegas, and I urge you to come and attend. If you have not been to one of these events, it's a great opportunity to meet the Informix senior developers as well as the executive staff. In addition, there are many hands-on labs and sessions to attend. In fact, we will also be covering the content of our Panther release at this conference, so if you have not been part of the Panther EVP this will be a great opportunity to get a deep dive into the newest features and functions. If you are interested in going, you need to hurry to take advantage of the early bird special ($500 off the normal fee), which expires on Aug 31. Here is the registration link: http://www-01.ibm.com/software/data/2010-conference/registration.html
The other thing we will be doing in Las Vegas is conducting our customer advisory council (CAC) meeting. The CAC meeting is used to discuss a range of topics, such as what our future direction should be, customer solutions, support, documentation, and so on. It is also a good opportunity to meet with IBM execs; in the past we have had meetings with Ambuj Goyal, Arvind Krishna, and Alyse Passarelli. If you are interested in becoming part of our customer advisory council, please let me or anyone else on our team know. We're always happy to have more passionate Informix advocates in the CAC; it helps us make Informix a better product for everyone and helps you communicate directly with the Informix decision makers.
Following on the theme of communicating with IBM executives, I wanted to mention that the Informix team had a chance to spend a few days with Martin Wildberger, VP of Information Management Development, last week in Lenexa, Kansas. Martin took over from Arvind Krishna around October of last year. It was great to find that Martin has the same dedication and devotion to the Informix business as Arvind. If you come to the IOD conference, you should seek out Martin (or any of the other IBM execs) and introduce yourself.
I wanted everyone to be aware that a continuous availability white paper is being written, and you can have your voice heard by going to http://www.advancedatatools.com/Informix/Survey.html and filling out the survey. I urge all of you to please take the survey, and blog about it if possible. Here is more information about what is being done: