I was talking earlier about encapsulation and the collection of objects that can be found in another object. Let's look at another possibility:
A corporation has multiple regions, a region has multiple branches, a branch has multiple customers. To summarize:
Let's say that the customers are loans taken by different types of companies. To find out the average amount of the loans given out by each branch, the strict approach would be that each branch has a method (function) that does the following:
customer_count = 0
total_loans = 0
for each customer
customer_count = customer_count + 1
total_loans = total_loans + customer.getLoanAmount()
end // for each customer
return(total_loans / customer_count)
We protect the encapsulation of customers by providing a method that returns the loan amount (getLoanAmount). The first problem we have relates to performance: all the customer objects for a branch need to be instantiated (created), which may require quite a bit of memory. The second performance problem is that each customer object instantiation requires one database call.
What if we want to compute this average at the region level instead of the branch level? Then, to preserve the encapsulation, we need to create additional methods to return totals and counts. I'll let you imagine the processing needed. On the performance side, the number of objects instantiated and the number of database calls grow with the number of branches and customer objects processed.
If you can convince the architects and programmers to relax their encapsulation requirements, you could add one method at the branch level, one at the region level, and possibly even one at the corporation level to return the desired average. Considering the average for a region, the method would implement one SQL statement looking like:
SELECT AVG(loan) FROM customers
WHERE region_id = :region_num
GROUP BY region_id;
In this case, I don't instantiate all the customer (and branch) objects, saving processing and memory. It is pretty obvious that the performance of these requests will be greatly improved compared to the "strict" OO approach.
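We could even compute the averages for all branches in one pass. As a minimal sketch, assuming the customers table carries a branch_id column (my own naming, for illustration), a single set-based statement returns the average for every branch at once:

-- one statement instead of one loop per branch
SELECT branch_id, AVG(loan) AS avg_loan
FROM customers
GROUP BY branch_id;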
Having a method that uses the database to do the processing is one thing. What about more complex processing like the average risk taken by a branch on their loans?
IDS provides the ability to implement user-defined aggregates, so it would be easy to implement the average risk function. The number of lines of code would be less than implementing it in the application, and the performance would be better, if only because of the significant reduction in the volume of data transferred.
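As a sketch of what this could look like, here is the general shape of a user-defined aggregate declaration. The four support functions named below are hypothetical UDRs you would write to maintain the running risk computation; only the CREATE AGGREGATE syntax itself comes from IDS:

CREATE AGGREGATE avg_risk
WITH (
    INIT    = avg_risk_init,    -- allocate and initialize the running state
    ITER    = avg_risk_iter,    -- fold one value into the state
    COMBINE = avg_risk_combine, -- merge partial states (parallel queries)
    FINAL   = avg_risk_final    -- produce the result from the state
);
-- then, for example:
-- SELECT branch_id, avg_risk(loan_risk) FROM customers GROUP BY branch_id;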
I hope that in the last few blog entries I gave you some things to think about to improve the overall performance of your systems. The bottom line is: get involved in the analysis and design phases of new projects. You can add a lot of value there.
A while back, I started reading a book called "Thinking, Fast and Slow" by Daniel Kahneman.
Daniel Kahneman is a professor of psychology who won a Nobel prize in economics.
I have to admit, I am not done reading it. I need more "plane" time.
What I read so far is fascinating. This is the type of book that can be read multiple times.
Today, I just want to relate some parts of chapter 14 where he put together a test to see how people would classify individuals
based on some personality descriptions. Here is the description:
"Tom W is a high intelligence, although lacking is true creativity.
He has a need for order and clarity, and for neat and tidy systems
in which every detail finds its appropriate place His writing is
rather dull and mechanical, occasionally enlivened by somewhat
corny puns and flashes of imagination of the sci-fi type. He has a
strong drive for competence. He seems to have little feel and little
sympathy for other people and does not enjoy interacting with
others. Self-centered, he nonetheless has a deep moral sense."
After reading the description, the subject was asked to figure out which field of study Tom was most likely in.
The description was actually designed so people should rank computer science among the best fitting
because of 'hints of nerdiness ("corny puns")'.
I laughed out loud when I read that part. I immediately thought of one of my co-workers, Robert U., who
reminds me regularly that I make corny jokes during my presentations. And yes, I graduated in computer science.
For those who read this blog: if you make corny jokes/puns and graduated in computer science, rejoice.
Embrace your nerdiness. You picked the right major.
The book is full of interesting information including the fact that even statisticians can misuse/misinterpret statistics.
One I really like is:
"you dispose of a limited budget of attention that you can allocate to activities. . .
You can do several things at once, but only if they are easy and undemanding."
My conclusion: if people tell you they're multitasking, they're doing trivial work.
First, let me put an end to the rumor that the IIUG conference was moved to San Diego to accommodate me.
It is true that I live in that area. It is also true that I am presenting my fair share of material, but I can assure you that not even one passing thought about my location was part of the decision.
This being said, the conference is approaching quickly. One more week in March and then a few weeks in April and we're there.
As usual the conference organizers are trying to outdo themselves year after year. This year is no exception. What happened since last year?
For one, Informix 11.70.xC3 was just out then. Since then, we've seen xC4 come out. Can we hope for xC5 soon?
On my side, I am giving four sessions on various subjects:
- Dummies guide to TimeSeries
If you want to get started with TimeSeries, come to this session.
- Informix applications uncovered on iOS
Yes, you can teach an old dog new tricks. At least this old dog is trying to prove it.
- PHP and Informix
Web applications have established themselves as mainstream. If you don't know PHP and the web, come see me.
- Update on Informix and open-source
Some progress there, come to this session and let's have a discussion.
I think these are interesting subjects, and you will find a lot more interesting sessions at the conference.
Take a look at the list of sessions and hands-on labs at: http://www.iiug.org/conf/2012/iiug/sessions.php
See you there!
I've been silent for quite a while. That does not mean I have not been busy!
A lot of effort has been put into TimeSeries over 11.70.xC3 and 11.70.xC4, and we are still going full steam ahead. We continue to improve its performance, scalability, usability, and functionality.
I wanted to put together a repository of information so people can find it all (or most of it) in one place. For this purpose, I put together a wiki on developerWorks that is dedicated to smart meter support. It is still a work in progress, but I believe it is a good start. You can find it using the tinyurl: tinyurl.com/InformixSmartMeterCentral
Let me know what you think.
I listened to a presentation on this subject recently.
What I found interesting is that the research found that good ideas do not come from a Eureka moment. For example, Darwin recounts his Eureka moment in his autobiography, but further study of his personal journals shows that he had the full theory of natural selection many months before his stated Eureka moment.
According to research, most good ideas come from discussions:
- The rise of coffee houses is credited for the Enlightenment period in England
- In research labs, most ideas come from the weekly lab meeting where people share their mistakes, issues, etc.
Another interesting point was that many good ideas come from the connection of people that share their thoughts to form a complete idea that is worth pursuing.
How can we generate good ideas we can act upon and make our environment better?
We need to interact with people in a situation that is conducive to generating these ideas. We have such an opportunity in just a few weeks: the Information on Demand (IOD) conference in Las Vegas, October 24-28.
Think about it: we will be with a bunch of people who have technical problems to solve around the use of technology in general and Informix in particular. We'll listen to presentations on new features, solutions in different industries, best practices, and birds-of-a-feather sessions, and mingle in social settings such as the Informix celebration on Monday night.
Let's take advantage of this great opportunity! See you in Las Vegas!
I just had a need for a function that takes a datetime year to second and returns the number of seconds since January 1, 1970.
That would be easy to do by writing a "C" UDR, but I did not want to deal with compiling and installing a shared library, so I decided to approach it as an SPL routine.
Not that it is a great thing, but I thought I'd share it with whoever needs it. Let me know if you find it useful:
CREATE FUNCTION epoch(dt datetime year to second)
RETURNING integer;
    DEFINE dt_varchar varchar(20);
    DEFINE mm, dd, yy, days, hh, mi, ss integer;

    -- number of days elapsed since January 1, 1970
    LET mm = MONTH(dt);
    LET dd = DAY(dt);
    LET yy = YEAR(dt);
    LET days = MDY(mm, dd, yy) - MDY(1, 1, 1970);

    -- extract hours, minutes, seconds from the string form of the datetime
    LET dt_varchar = dt;
    LET hh = substr(dt_varchar, 12, 2);
    LET mi = substr(dt_varchar, 15, 2);
    LET ss = substr(dt_varchar, 18, 2);

    RETURN (days * 86400) + (hh * 3600) + (mi * 60) + ss;
END FUNCTION;
You can use it either in an SQL statement or directly with EXECUTE FUNCTION. For example:
EXECUTE FUNCTION epoch("2008-11-25 08:32:45");
1 row(s) retrieved.
When I was in school I wanted to know why I had to learn something: Why learn about history? It’s about a bunch of dead people, often from far away. I would also ask: Why would I ever learn English. . .
I feel that the computer industry not only forgets history but is quick to discard what has been done before. Just remember when object databases came out: the trade magazines were trumpeting the death of relational databases.
There is a disconnect between the object-oriented (OO) approach and the use of relational databases. This will be the subject of the next few entries. Let's start with an example:
An object person will look at the employees of a company and see managers, full-time employees, part-time employees, and contractors. This leads to the following model:
With the definition of the multiple types of employees, we can easily see that they will want multiple tables, one per defined object. A database person, of course, sees something like:
CREATE TABLE employee (
    empNo int PRIMARY KEY,
    mgrNo int,
    . . .
);
As you can see, a "data access expert" can already start some discussions with the OO architects and programmers.
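For illustration, one possible compromise is a single table with a type discriminator; the employee_type column and its codes below are my own naming, not something prescribed by either camp:

CREATE TABLE employee (
    empNo         int PRIMARY KEY,
    mgrNo         int REFERENCES employee(empNo), -- self-reference to the manager
    employee_type char(1) NOT NULL, -- 'M' manager, 'F' full-time, 'P' part-time, 'C' contractor
    . . .
);

The OO side can still map each subclass to the rows with the matching type, while the database side keeps one table to index, constrain, and query.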
Don’t get me wrong. I like OO. I think it is a wonderful approach, but just like anything, it can be abused. See what you think of: http://csis.pace.edu/~bergin/patterns/ppoop.html
I'm always looking for interesting information to stimulate my thinking.
My morning routine usually starts at around 5:30am and I use my tablet to look at news, blogs, tweets, and some web sites.
Among the tweets I get are some from a site called TED. I've talked about TED before; take a look at my blog entry for January 2011: Happy new year!
In this blog entry, I recommended no less than four TED presentations.
For people who don't know TED, it is an organization that organizes conferences on all sorts of subjects. The presentations used to be limited to 17 minutes.
Now, you can also find presentations that are much shorter. TED's tagline is: "Ideas worth spreading".
So, in the morning, I often check what's new on TED to see if there is something interesting to watch during breakfast (of course, when I have breakfast alone...).
I recently came across one that I thought was interesting considering everything we've been hearing over the last 4-5 years about the global economy.
Of course, the fact that it talks about complexity and emergence is just a bonus.
Here is the link to this presentation: Who controls the world?
Someone asked me the following question:
"How do I keep passwords in the database so nobody can get them?"
It means that we cannot keep the passwords in plain text in the database. Informix has a few functions that can be used for encryption: ENCRYPT_AES and ENCRYPT_TDES. It would be easy to create a table and encrypt the column that contains the passwords.
The next statement that came up was: "..but, if someone has the encryption password, he can get all the passwords. We need to protect the passwords from internal access".
This means that we need to use a different password to protect each password in the table. The solution I proposed was to use the password to encrypt itself. Let's look at an example:
CREATE TABLE passwd (
    col1 int,
    col2 lvarchar -- must be wide enough to hold the encrypted value
);
INSERT INTO passwd VALUES(1, ENCRYPT_AES("Jacques", "Jacques"));
INSERT INTO passwd VALUES(2, ENCRYPT_AES("Lance", "Lance0"));
INSERT INTO passwd VALUES(3, ENCRYPT_AES("Daniel", "Daniel"));
INSERT INTO passwd VALUES(4, ENCRYPT_AES("Umut", "Umut01"));
The values inserted look as follows:
SELECT * FROM passwd;
I can now test if someone has the right password for user 1 by using the password value to decrypt itself:
SELECT col1, DECRYPT_CHAR(col2, "Jacques") FROM passwd WHERE col1 = 1;
If I use an improper password, I receive an error:
SELECT col1, DECRYPT_CHAR(col2, "Jacques") FROM passwd WHERE col1 = 3;
26008: The internal decryption function failed
One more thing: note that the encryption password must be at least six characters long. This is why I padded some encryption passwords in the example. An easy way to work around this limit is to always add padding so we meet the minimum size. Keep in mind that the maximum size of an encryption key is 128 bytes.
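As a minimal sketch of that padding idea, we can always append a fixed suffix to the password before using it as a key; the suffix "#pad#" below is an arbitrary choice of mine, and the same padded value must of course be used for both encryption and decryption:

-- "Bob" alone is too short to be a key; "Bob" || "#pad#" is not
INSERT INTO passwd VALUES(5, ENCRYPT_AES("Bob", "Bob" || "#pad#"));
SELECT col1, DECRYPT_CHAR(col2, "Bob" || "#pad#") FROM passwd WHERE col1 = 5;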
With this approach, we can keep passwords in the database and keep them secret.
Yes, a new version of Informix is now available: Informix 11.70.
There are a lot of great features in this release. I could talk about the flexible grid that allows you to manage many machines like one and supports rolling upgrades. I could talk about the new analytics features, where we've seen warehouse-type queries speed up by around 50%. I could talk about storage provisioning, improved installation, and embeddability features. Yes, I could talk about all this, but at this time I want to talk about some features that should interest application developers.
I have to admit I am a little biased since my group is called Application Development Services. However, the features I want to talk about were either requested by customers or had a very positive reception in early mentions under non-disclosure or during the beta period.
The first one will facilitate porting schemas from other databases to Informix. Let me first show an example:
CREATE TABLE tab (
    col1 int NOT NULL DEFAULT 0,
    col2 int NULL,
    col3 integer REFERENCES tab1(col1) CONSTRAINT tab1_c1
        ON DELETE CASCADE
);
The first improvement is the ability to change the order of constraints and default values. Before Informix 11.70, the col1 definition would have returned an error since the DEFAULT clause had to be located before the NOT NULL constraint.
The second improvement is the ability to explicitly say that a column accepts NULL values. Before, this was implied when the NOT NULL constraint was absent.
The last improvement shown in the example above is that we can add ON DELETE CASCADE after the constraint name.
Another improvement in the DDL area is the ability to conditionally execute CREATE and DROP statements. Here are two examples:
CREATE TABLE IF NOT EXISTS tab ( . . .);
DROP PROCEDURE IF EXISTS my_proc();
If, for example, you want to make sure a table is re-created, you could always say:
DROP TABLE IF EXISTS tab;
If you want to make sure that you keep the table if it already exists, then don't do the "DROP IF EXISTS" and simply use "CREATE TABLE IF NOT EXISTS".
Finally, here's another DDL feature that was in great demand. It is not really an application development feature, but it has been requested a lot: the ability to define the extent sizes in a CREATE INDEX statement:
CREATE INDEX myidx ON tab(col1) EXTENT SIZE 8 NEXT SIZE 8;
Don't forget to read the release notice, since there are many other improvements to the INDEX capabilities.
On the DML side, we are now able to use expressions in the COUNT aggregate function. This can be useful if you want multiple aggregates in one statement:
SELECT COUNT(*) AS total,
       COUNT(CASE WHEN sex = 'M' THEN 1 ELSE NULL END) AS males,
       COUNT(CASE WHEN sex = 'F' THEN 1 ELSE NULL END) AS females
FROM tab;
Without this capability, you would have to compute the three counts separately and combine them, for example with a join of derived tables:
SELECT * FROM
    (SELECT COUNT(*) AS total FROM tab),
    (SELECT COUNT(*) AS males FROM tab WHERE sex = 'M'),
    (SELECT COUNT(*) AS females FROM tab WHERE sex = 'F');
These are just a small part of the new improvements in Informix 11.70. Make sure you read the release notice to learn more about Informix 11.70 at:
There is so much going on!
As you surely know, we've been doing a closed beta of the next version of Informix. We have received a lot of great feedback, and we keep working on this release.
We still can't talk about it, but it is just a matter of time before we can, so stay tuned.
On other fronts, I am working on a follow-up to my application development short book. I've received a lot of positive feedback on that book, and I am excited about continuing on the subject. When will it be ready? I'm hoping sometime this year.
Finally, do you realize that we are barely more than a month away from the Information on Demand (IOD) conference? I hope to see you there.
I ran into a simple problem the other day: I got an error while creating an index because the key was too big. As you may remember, the maximum size of an index key on a standard Unix/Linux system is 387 bytes.
Why do we have this limit?
This is a function of the page size and the way a B-tree index works. With the limit of 387 bytes on a 2KB page, we can fit at least 5 keys per page. This way, we divide the data into at least 5 parts at each level of the index. The end result is fewer comparisons, getting us to our result faster. If we had only one key per page, it would be the equivalent of doing a sequential scan, so the index would be useless.
In IDS version 10.0 (2005), Informix introduced the configurable page size. From that point on, it is possible to create dbspaces with page sizes of up to 16KB. The available page sizes have to be a multiple of the basic page size: 2KB or 4KB.
These larger pages can provide better performance when you have a wide table where the row size could be, let's say, 12KB. This way, you can fit an entire row in a page instead of using page chaining to support these larger rows. The savings in I/O can make a noticeable difference in performance in many situations.
Coming back to my indexing problem, I can fix it by using a larger page size. The documentation has a table of the maximum index key size for each page size; it starts at the 387 bytes mentioned above for a 2KB page and grows with the page size.
If your key fits in a 2KB page (shorter than 387 bytes), you could still use a larger page size for your index. The difference is that more keys fit in one page, so the index will not be as deep, which can provide additional performance.
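As a sketch of what this looks like, assuming a 16KB-page dbspace named dbs16k has already been created by the administrator with the onspaces utility (the dbspace, table, index, and column names here are made up for illustration), the index simply names that dbspace in its storage clause:

-- place the index in a 16KB-page dbspace to allow a key longer than 387 bytes
CREATE INDEX wide_idx ON wide_tab(long_col) IN dbs16k;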
Why not simply use the 16KB page size everywhere?
The short answer is that you could waste space on the pages used for a table. A page can hold a maximum of 255 rows. If your page size is 16KB and your rows contain only two integers (2 x 4 bytes), you could, in theory, fit over 2,000 rows in that page. Since we are limited to 255 rows, we waste over 14,000 bytes per page.
Why not use four or five different page sizes?
Each page size requires its own buffer pool, and we have to decide how much memory to allocate to each of these pools. Our decision may not result in the optimal memory allocation: some pools will have too much memory while others would benefit from more. Bottom line, this would make system administration more complex.
I would suggest limiting ourselves to two page sizes: the default page size and one other. The second page size depends on the requirements of the environment. I would also look at the size of the I/Os on the particular machine and how many requests perform multiple I/Os on sequential data.
If you haven't looked at the configurable page size in IDS, maybe it is a good time to do so now.
There was a big change for me this year: I left the Informix CTE group to lead a new group. I am now a manager... and architect.
My new group is called Application Development Services. This means that my group looks at IDS from a programmer's point of view. Let me give you an example of what that means. Let's look at the major features included in IDS 11.50.xC6:
Backup from an RSS server
Dynamic listener threads
View event alarms
Basic Text Search enhancements
MERGE statement enhancements
I care about these features, but my attention goes to a feature of the new Client SDK that deserved only a one-line mention in its release notice:
"When you install Client SDK or IConnect, you have the option to install IBM Data Server Driver version 9.7. For more information, see the Client Products Installation Guide."
As you may remember, the long-term direction for client applications is to use the DRDA interface to IDS. With this one-line statement, I can now write programs using CLI (ODBC) without having to figure out where to get the driver. Since IBM has multiple packages available, I could easily have made the mistake of thinking that I needed to download the entire DB2 client (about 600MB) to get this functionality.
In addition, this is all I need to build PDO_IBM for PHP applications or the ibm_db gem for Ruby on Rails development.
As for what my group will do, we can start by figuring out and prioritizing the features that will make Informix more attractive to developers and programmers. It's not just features in the server; we have to consider everything, even documentation.
I'm sure I'll have more to say about this later this year. Hopefully I'll have interesting results to report by the time I see some of you at the IIUG conference in April.
I'm currently in Paris, in the second week of a business trip. For a two-week trip, it is pretty common to have some clothes laundered; otherwise it makes for a lot of stuff to lug around.
I took a look at what was offered at my hotel: to launder one men's shirt, they charge 8.50 euros (around 12.37 US dollars). As I was leaving the hotel, I saw a hotel employee with a laundry bag in her hands. Looking at the size of the bag, I could just imagine the small fortune spent by that guest.
As I was walking to the IBM office, I passed a dry cleaner that advertised the cleaning and pressing of men's shirts for 2.20 euros per shirt for 5 shirts. The price at the hotel was over 3.8 times that. With a little knowledge and a 5-minute walk, the hotel guest could save a significant amount of money: for 5 shirts, the price goes from 42.50 euros to 11 euros. For a company with a lot of employees who use that type of service, this can add up to significant savings.
Of course, that made me think of Informix. It is well known that IDS provides a high level of performance and scalability and requires minimal resources for its administration. In some cases, one database administrator can manage thousands of instances. Of course, it is much easier to go with a safe choice, use as much hardware as needed, and hire as many employees and consultants as the situation requires for the management of the environment and business application development. This is simply the cost of doing business...
It seems to me that with a little knowledge and a little effort, that cost of doing business could be greatly optimized.
I think Terri is pulling my leg. She is apparently receiving concerned emails about what happened in Brussels. It was a humorous situation that I wanted to relate in a fun way. I guess I have a future in fiction writing :-).
Really, nothing happened. She took a picture; the police courteously told us that the American embassy did not want people to take pictures. Terri deleted the picture from her camera while having a pleasant time with the officers. We then left and laughed about it.
So, don't worry, Terri is doing fine and we all had a good time in Brussels. I strongly encourage people to come and visit.
I'd like to come back to the book "The Goal" I mentioned in my last blog entry.
This book focuses on manufacturing environments, but the interview at the end of the book mentions that the concepts of the theory of constraints (TOC) can be applied to other fields. Looking back in the book, I found that it asks three basic questions about the impact of changes:
- Did you sell more?
- Did you reduce the number of people on the payroll?
- Did you reduce inventory?
We can easily see that this makes sense to a financial person in manufacturing. Let's see how we can look at it when our concern is running a database.
Did you sell more?
That could be a tough one, because sometimes it is difficult to tie what we do to the company's sales. That reminds me of a needs analysis I did early in my career. The drafting department wanted to get a CAD system. At the time, that represented an investment of around one million dollars. I asked: "What happens if the plans are late?" I got blank stares as a reply. I should have talked to their customers to find the answer. We should always ask what happens if we take longer to do something, or if we don't do it at all. Here's a great quote:
"The cheapest, fastest and most reliable components of a computer system are those that aren’t there"
Gordon Bell, Encore Computer Corporation
Did you reduce the number of people on the payroll?
That's a question we always try to avoid, but the bottom line is that it gets considered. Don't forget that if we can sell more with the same number of people, that's the same as reducing the payroll.
I've met many customers with mixed environments where we see a 10-to-1 ratio between the personnel for the competitor's platform and the Informix personnel. Why not bring that up to the appropriate people? I'm sure your local IBM representative will be happy to help.
Did you reduce inventory?
Dr. Goldratt (author of "The Goal") says that investment is the same as inventory. So, what investment is made to increase sales? What is the return on investment? This seems to be a great opportunity to talk to people who use other DB products: How much are you investing in people to run these systems? What could you save there? How much are you investing in hardware? Could that be reduced? How much in software? I've heard that people who add Informix to their environment can get significant discounts from their other DB vendor. That represents a reduction in the investment.
I think these three questions are worth exploring no matter which environment you're in. That can be good for your company, for you, and for all the people that invest their efforts into the Informix products.
The machine configurations caused problems in using Data Studio with WAS CE; I already mentioned that yesterday. This also meant that we could not do the web services lab. To work around this problem, I spent a few minutes showing the students what is involved in creating a web service using the VMware image on my laptop. Of course, it took a lot less time than the lab would have required since everything was already set up.
The rest of the class went well. It included a review of the enterprise features such as backup, SDS, HDR, RSS, CLR, ER, CDC (Change Data Capture), and MQ integration. I think we should add a lab on shared disk and HDR, since the labs appear to be very well received. They are more fun than just sitting there listening to a speaker. The class ended with a presentation on cloud computing.
I went through the evaluations and found that the class was a success. I know there are a few adjustments to make, but it was a good start. All in all, it was a good few days.
I took the train to Paris. It takes around 2 hours 15 minutes to cover the 500 kilometers between Strasbourg and Paris; that's an average of over 220 km per hour. The ride was so smooth. It is interesting to note that a plane ride would have taken one hour, but the train is actually faster since you can get there just a few minutes before departure and it drops you off in the middle of Paris instead of the "far away" Charles de Gaulle airport. That's a reminder that we should always use the right tool for the right problem :-)
You may not know it, but the Informix lab is extending a helping hand to universities around the world. One example of that was the hosting of university professors at the last Informix conference.
As part of this, I am on my way to the University of Strasbourg (France) to teach a 3-day seminar on subjects related to IDS. I had all the latitude I wanted (and more) to decide on the content. I will be delivering this seminar starting next Monday (June 8). We'll see how it is received. Watch for my blog entries after each day, network access permitting.
I came back from the Informix conference Thursday night and woke up thinking about an analogy about why we use Informix Dynamic Server. More on that in a minute.
I've been using databases for a long time. I believe that the first formal database system I used was back in 1984. It was a hierarchical database, and I used it to develop an inventory system for the Canadian Coast Guard. Over the following years, I used and supported multiple database systems, some looking more like C-ISAM and others relational. I still remember the good old days when I had to debug Oracle installation scripts :-)
So, why Informix? Isn't a database a database?
I used to use a car analogy: people buy cars and are used to what comes with them. If they have to go to the shop to get the car fixed or tuned every other month, that's just the way cars are. Who would believe that you could buy a car and only have to put gas in it year after year without having to waste time in the shop? The car is used to get you from point A to point B day after day. That almost makes it invisible, but not quite, since you still have to drive it. It's not the same with a database system: it really can be invisible.
I woke up Friday with this thought: you can write just about any application in any computer language you want. Why don't we all use COBOL? Way back, I knew a guy who could do EVERYTHING in COBOL. He was even doing system programming! An object-oriented version of COBOL has been available for years, but why? Isn't the "vintage" version of COBOL good enough? If I'm not mistaken, the number of COBOL lines of code in production still surpasses that of any other programming language. That should be enough of an argument to standardize on it.
It seems to me that many people apply this line of reasoning to database systems. The trend is to look at databases as a commodity. Who cares that one barely requires any attention? Who cares that it provides easy continuous availability? Who cares that it has great storage optimization? The difference is only more overhead, and that translates into more costs. Those significant costs are easy to hide, so why worry about them? Everybody does it, so there is no need to be more efficient...
Well, me, I'm old school. I come from an era where memory was measured in kilobytes and disk drives in megabytes. Yes, memory is much bigger now and not that expensive. Disk drives are so much bigger and not very expensive. Computers are so fast now. It seems to me that we should stop the insanity and pay attention to efficiency. Isn't that what cloud computing, virtualization, and being green are all about?
No matter how I try to slice it, to me, Informix is number 1.
I mentioned the Informix warehouse in my previous entry. There is the chat with the lab coming up. Here's something more: a new tutorial on developerWorks:
Get started with Informix Warehouse Feature, Part 1: Model your data warehouse using Design Studio
Then there are the Informix Warehouse product pages:
A special online event with a live webcast is scheduled for February 25.
Instead of repeating what is posted elsewhere, let me introduce you to Spokey Wheeler's blog. If you don't know about it, you may want to start visiting it regularly. Here's the link to Spokey's blog entry:
Spokey's blog on the Data in Action virtual event
Read the entry and register for the event!
The Informix team is putting a lot of energy behind this conference. The team is also putting together a Customer Advisory Council meeting on June 2nd, where there will be discussions on product directions and feature prioritization.
For more information on the conference, please see:
The call for speakers is open until February 13. This is a great opportunity to participate in the EMEA Informix community and get some exposure for yourself and your company. Take advantage of it.
Find out more at the URL mentioned above. Like it says on that site: Register Today!
The other day I put out an SPL function that converts a DATETIME into a Unix timestamp (the number of seconds since Jan 1, 1970). I needed it as a compatibility function; in the blog entry, I should have called it UNIX_TIMESTAMP. It makes sense to also have the reverse function that takes a timestamp and returns a datetime year to second. Here it is:
CREATE FUNCTION from_unixtime(secs int)
RETURNING datetime year to second
WITH (NOT VARIANT);
    DEFINE ival interval hour to second;
    DEFINE dt datetime year to second;
    DEFINE days, hh, mi, ss integer;

    -- peel off seconds, minutes, and hours; what remains is whole days
    LET ss = MOD(secs, 60);
    LET secs = secs / 60;
    LET mi = MOD(secs, 60);
    LET secs = secs / 60;
    LET hh = MOD(secs, 24);
    LET days = secs / 24;

    -- rebuild the time-of-day part as an interval and add it to the date
    LET ival = (hh || ':' || mi || ':' || ss)::interval hour to second;
    LET dt = EXTEND((MDY(1, 1, 1970) + days), year to second) + ival;

    RETURN dt;
END FUNCTION;
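As with the epoch function, you can use it in an SQL statement or directly with EXECUTE FUNCTION. By my arithmetic, epoch("2008-11-25 08:32:45") works out to 1227601965, so the two functions should round-trip; for example:

EXECUTE FUNCTION from_unixtime(1227601965);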
Does anybody know what Informix 3.30.12Z is?
I got a call yesterday to help a customer that had that product. To give you an idea of what it is, I received the following header:
Informix Master Menu ........................................Informix Version 3.30.12Z
Copyright (C) 1981, 1982, 1983, 1984 Relational Database Systems, Inc.
Software Serial Number . . .
Sorry for the "...": I did not know how to insert significant spaces.
For people who don't know, Relational Database Systems was the name of the company before it was renamed Informix Software. With a copyright going up to 1984, that means the product is 24 years old! Add to that no updates and no support.
I had to contact Jonathan Leffler to find out what this was about, but one thing is clear: Informix (Relational Database Systems, Inc.) has a long history of reliability and dependability.
It is finally here: the Information on Demand conference.
It started Saturday with the Informix Customer Advisory Council (CAC) meeting. The CAC is a set of customers who get together with the Informix development team to discuss different aspects of the products and their directions. The meeting was full of interesting information; most presentations included the word "confidential" at the bottom of each slide... Makes sense for presentations that talk about roadmaps and future features.
There was one (non-confidential) thing that I did not know about and that I think everyone should know: the improvements to the information center.
If you go to the information center at http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp you will find two new things on the welcome screen:
- Subscribe to the information center updates (RSS feed)
- Download the new search plugin
The RSS feed gives you a way to know when new information appears in the information center. That is useful since you don't have to go through the entire site to find out. You never know when some new information could give you a solution to your business problems!
The other one is very interesting. You can install a small plugin in your browser so you can search the information center from there without having to go to it. Guess what I did after the meeting... The installation is so quick that I thought nothing had happened. Now, from the search box in the upper right corner of my browser, I can start a search that gives me the results from the information center. By the way, the plugin is available for IDS 11.50, 11.10, and 10.0.
Lately, I've heard a lot about cloud computing. It starts with fast networks and technologies such as mashups and REST interfaces. A mashup allows you to take information and processing coming from different sources and put them together to provide a new application and new insights based on the data manipulation.
REST (REpresentational State Transfer) is a way to access data, possibly through a web service, using the HTTP protocol. This makes it simpler than using the SOAP interface of SOA.
Now, providers like Google and Yahoo! provide email and storage services. I even hear that, with the help of virtualization, some companies are looking at providing a "web stack" that would offer the ability to run company businesses remotely. That sounds like a new twist on the service bureaus of old.
There are several issues around this, starting with response time and the complexity of providing what companies really want, but it seems like something that could take off. There is a lot of talk about web applications, so why not a web IT infrastructure? That has the potential benefit of centralizing all the administrative needs.
I think this is something worth discussing. What do you think? Post comments to my blog to let me know.
One of the first applications I ever used (after lunar landing) was called Eliza. It was at a time when terminals wrote on paper and a 110-baud transmission rate was state of the art. The program would start with the following statement:
I am Eliza. Tell me about your problem.
If you proceeded with a coherent conversation, you'd think you were talking to a real person.
I read, much later, that some people, after starting a dialog with Eliza, asked for privacy to continue their dialog.
Eliza was designed in 1966 (I did a Google search on it). The IBM S/360 came out two years before. All that to say that people have done a lot of amazing things with computers since the first computer was invented. Many people have a tendency to discard what has been done in the past in favor of the latest technology.
From time to time, I'll suggest some reading on technology in general. The first one I want to suggest is about programming. It first came out in 1986; the second edition came out in 2000:
Programming Pearls, Second Edition
Jon Bentley, ISBN 0-201-65788-0
Hope you’ll enjoy it.
The first entry went in almost without problems. I had to quickly add some formatting after I posted it to make it more readable. Over time, I'll figure out how to do this and make my formatting fancier.
I'm new to blogging. I'm also pretty new at reading blogs. With the new Web 2.0 technologies, there is no need to go to a blog every so often to find out if there is something new. So, in case you don't know, here's some information on dealing with blogs.
Blogs support a capability called syndication. This allows a site to create what's called a feed to notify people of changes. There are two main types of feeds: RSS and Atom. No need to know more about them for now; just know that you can use a feed URL to get the changes from a syndicated site.
I would suggest that you use a feed reader. Why? Because there are many sites you may want to subscribe to. For example:
- Guy Bowerman's blog (http://www-128.ibm.com/developerworks/blogs/page/gbowerman)
- Madison Pruet's blog (http://www.ibm.com/developerworks/blogs/page/roundrep)
- Feeds from the IIUG site (http://www.iiug.org/rss/index.php)
- Informix Zone site (http://www.informix-zone.com/)
- and many more. . .
Take a look at the latest IIUG newsletter (insider #94) for more in the Informix Resource section.
Having to visit each site regularly to see if it has new material can be time consuming. A feed reader aggregates all those sites and lets you know what's new. That's the way to go!
If you do a search on the web, you can find multiple feed readers. I did not want to spend too much time figuring it out, so I downloaded a Windows-based open-source product that will do until I find, or am told about, something better. Check out: http://www.feedreader.com/
If you haven't done it already, set yourself up and stay informed on the latest entries.
That's it for the introduction to this blog. Now it's time to dive into Informix and Computing!
Back in around 1988, I decided it was time for me to learn about object-oriented programming, design, etc.
Learning C++ was not too bad, but when it came to defining problems in an object-oriented way, I started to panic: Had I reached the limit of what I was able to learn? Had I been passed by technology? Was I now a dinosaur? (And you thought the title referred to something else...)
It turned out ok... I think :-)
Today, the rate of change in technology has been accelerating and does not seem to be slowing down. To make things worse for database people, we've been told for years that databases are commodities, just persistent storage. I may expand on that later, but let's just say for now that I totally disagree.
This being said, why start a blog? For one, I want to communicate with the Informix community in a more continuous manner and on subjects that may not require a one-hour PowerPoint presentation. I want to discuss any technology that is remotely related to databases, and I want to start a continuous dialog with the Informix community on any subject of interest.
This blog will be in part educational (I hope) and also a place to discuss business problems and potential approaches to solutions. I believe that DBAs are experts in optimizing database access. It is time to expand DBAs' impact in the enterprise to improve data processing. There is no need for a DBA to become a programmer; it is a matter of getting involved in the analysis and design of new applications. For programmers using Informix, let's start talking about what you are trying to accomplish. We may be able to find a better approach, especially considering the new IDS features and database extensibility.
Please comment on my blog entries, send me your questions, and let's start talking!
This has been in the works for quite a while but now it’s out!
This new version adds multiple interesting new features including:
Streaming data to Excel
Easy setup for high-availability
Resilient processing with the consistent region annotation
Streaming data to Microsoft Excel makes it easy to create user interfaces that give real-time feedback on what's happening, in addition to providing all the capabilities of Excel for additional processing on the data received.
A lot has been done on the high-availability front. It is much easier to set up redundant administrative services and have them fail over automatically when needed. In addition, there is no need for a DB2 database: Streams now relies on ZooKeeper to preserve all the state information. Also, to continue to improve on high availability, Streams no longer requires a shared file system.
There is a new feature that guarantees at-least-once processing of tuples within a region, a set of operators. It is easy to use: we simply add annotations that define the region and set a few parameters.
There have been enhancements to existing toolkits and the addition of new ones, such as support for Kafka in the messaging toolkit and the new HBase toolkit.
There is more to the new release of Streams. You can find the online documentation in the knowledge center at:
To get an idea of what's new in this release, take a look at:
The general session started with an example of context computing and an interview with Captain Phillips.
All that was pretty exciting but what stole the show is the announcement of the partnership
between IBM and Twitter for analytics.
Then I went on my way to attend Streams sessions talking about use cases.
The first one I attended was about a partner, Voci, that has an appliance that converts audio to text.
In addition, it adds metadata such as the type of voice, accent, and sentiment.
This solution can be augmented with InfoSphere Streams and BigInsights to take action in real time.
The next session was a panel of experts on geospatial analytics.
In the afternoon, I attended a session on the features of the new Streams beta that was announced last Friday.
You can find more information at http://ibm.co/streamsdev.
I followed with a session on context computing used to counter fraud, and I finished my day
with a panel of users.
The conference is winding down with the last day tomorrow.
Another full day.
It started at 7:00 with a breakfast meeting and was followed by a conference call.
I then went to the conference bookstore for a book signing activity and moved on to a customer lunch.
As I mentioned in other blog entries, my new book is now out, at least at the conference:
"The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
My afternoon was taken by a Streams and text analytics lab.
I went back to the conference floor and had interesting conversations with many technical people
from different world regions. The conference sure provides great opportunities.
I'll be able to catch up on some Streams sessions tomorrow. I can't wait to hear some customer and partner stories.
Also, I heard through the grapevine that there may be a big announcement at the general session.
I'll make sure not to miss that either.
After walking by 3 different Starbucks, I arrived at the conference breakfast hall.
I thought I would have a quiet breakfast by myself when I saw Bruce Brown, a big data partner expert.
Soon after I sat down, others joined us: they were long-time InfoSphere Streams experts. That was a great opportunity to talk shop and exchange information.
Then it was time to attend the general session that started at 8:15.
The session started with Jake Porway and Jeff Jonas talking about context computing.
The session was so packed with information that it is impossible to summarize it properly.
Let's just say that Bob Picciano talked about three imperatives, among them:
- Data is the new natural resource, the basis for business advantage
- Systems of engagement
Multiple speakers expanded on these themes.
I particularly liked the line: "Geospatial data will become analytics superfood".
There were many interesting sessions to choose from, but because of multiple engagements, I only attended
the Joy Global session, where they described the real-time analytics they do while monitoring mining equipment.
There was so much going on that, if you are not at the conference, you may want to look for InsightGo to attend some general sessions remotely.
Now it's time to move on to Tuesday!
The event went as planned at the Mandalay Bay convention center with presentations on:
The Internet of Things
Informix gateways and Informix capabilities for the Internet of Things
The IBM Internet of Things Foundation
Real-time analytics with Streams in the context of an Internet of Things architecture
Many people attended and were engaged in the presentations. Overall, a success.
The Insight conference officially started with the opening reception.
We are getting ready for a great week of learning and networking.
We're up and going.
The conference is still being set up, but there are events happening this Saturday.
This morning I participated in the "Big Data and Analytics EdCon", part of an education session for faculty
offered under the IBM Academic Initiative. It was a hands-on session introducing InfoSphere Streams, and it was full!
All sorts of other sessions are taking place in other areas of the Mandalay Bay convention center.
Tomorrow, I'll be part of the "Internet of Things Deep Dive" as I mentioned in my previous blog entry.
The deep dive goes from 11:00am until 5:30pm. There is still time to register for it:
If you are already in Las Vegas for the Insight conference, this would be a good use of your time.
Finally, on Sunday evening, the Insight conference officially starts with the Solution EXPO Grand Opening Reception
starting at 6:00pm.
I'll post comments on the conference daily, so stay tuned!
We are barely more than two weeks away from the Insight conference.
As I mentioned in my previous blog, there are lots of interesting sessions on Streams. Still, there is more.
As you know, Streams is excellent at providing real-time analytics. It can be used with other
products to provide solutions in many domains. One of them is the Internet of Things (IoT).
It happens that I'll be participating in an IoT deep dive on Sunday, October 26.
I'll be joining the main speakers:
Michael Curry, Vice President, WebSphere Product Management, IBM.
Jerry Keesee, Director, Real-Time Context Computing, IBM.
Jeff Jonas, IBM Fellow and chief scientist, context computing.
The technical section is divided into three parts:
Kevin Brown talking about sensors and gateways
Peter Crocket telling us about the IBM IoT Foundation
Jacques Roy covering data-in-motion with Streams
You can register for the event at: http://insight-deep-dive.eventbrite.com
Don't forget to come see me at Insight in my sessions and labs, as well as at a book-signing
session on Tuesday, October 28, at the Insight conference bookstore between 9:30 and 10:30.
The book is: "The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
See you in Vegas!
OK, this is probably not news to you, but there is information you should know.
The Insight conference, formerly known as Information on Demand (IOD), is going on Oct 26-30.
This is only 35 days from now! There is a lot of good content. For me, it starts on Sunday with an IoT deep dive call/meeting.
From there, I'll go to the demo ped to spend my evening. Please come visit!
For the week, I am particularly interested in the Streams sessions. I am also involved in a few sessions myself:
LCI-4252A: Hands-on lab "Streams and text analytics" on Tuesday afternoon (2:00pm)
LCI-5454A: Hands-on lab "The Internet of Things and Geospatial Analytics Powered by InfoSphere Streams", on Thursday morning (10:00am)
IIS-7096A : Expert Exchange "How to Harness the Internet of Things"
The other exciting part for me is that I am coming out with a new book:
"The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
I am doing a book signing on Tuesday between 9:30 and 10:30.
The Insight conference provides many excellent learning opportunities on many subjects, including cloud, mobile/social, security, analytics, and more.
It is also a great opportunity to network with experts from IBM, partners, and other customers.
I'm looking forward to seeing many of you there at the Mandalay Bay in Las Vegas.
For more information on the conference, please go to the following web site:
When we talk about processing data in real time, it is easy to just write a program and be done with it.
The problems start piling up when we add analytics and volume.
A program is easy to write when it can process records sequentially. Once you reach the limit of this sequential processing, you start adding complexity that may represent the bulk of your work: you start by using multi-threading, and eventually you also need multi-processing to take advantage of multiple machines. It is much easier to use a framework that reduces those issues.
Still, a framework may give you the ability to distribute your processing, but how easy is it to do? You want proper tools to assemble the many operations that you want to link together. Then you also need tools to easily identify bottlenecks so you can parallelize your operations. And what about all the standard operations you would expect to be able to do?
This is where a platform comes in. It gives you the foundation for distributed processing but also gives you pre-built capabilities to interact with the outside world (files, message queues, databases, and so on) and also analytics so you don't have to reinvent the wheel.
For a more complete discussion of the subject, take a look at my two articles on the IBM Data magazine site: part 1 and part 2.
InfoSphere Streams is starting to engage the open-source community to provide additional capabilities to its real-time analytics platform.
This is still very early in the process, and we can assume we'll see it evolve quickly. It may also be a way to consolidate
the offering of the most popular open-source toolkits currently available on the Streams Exchange.
One of the projects is under the name resourceManagers.
The resource manager currently available to support Streams is YARN!
Learn more about what is available for Streams on GitHub by looking at the newest page from the InfoSphere Streams playbook:
Streams on GitHub.
Does anyone remember this cartoon? I think the first time I saw it was in the '80s. Still, it keeps coming back.
This used to apply to IT requests. It can also be applied to all sorts of things, including how quickly you want to go from data to actionable information.
In today's world, it seems that we need to get insights now. This is one reason for the rise of interest in "data in motion".
Real-time analytics apply in many industries including medical, telecommunication, and security. You can find additional examples in the
following article: Big Data in Motion Where? Everywhere.
There is a special need for processing machine data. The data can be generated at such a rate that we need machines to analyze it all.
You can find more information on machine data examples in the ebook: The Rise of Machine Data: Are You Prepared.
Data-in-motion processing is here to stay. It is a great approach to solving many business problems. Of course, this approach does not work in a vacuum.
It is a great complement to new and established systems based on data at rest; here, I mean systems that use data repositories such as operational
data stores, data warehouses, Hadoop (BigInsights), and other NoSQL repositories.
The IBM solution for data in motion is InfoSphere Streams. You can download a free copy of the software to learn about it.
It is called the InfoSphere Streams Quickstart Edition. Visit the streamsdev site to download a copy of it and access an introductory lab (under Docs).
Do you know about IBM Data magazine? It is the regular newsletter based on ibmdatamag.com that many people receive in their inbox
every few weeks (or is it weekly?).
This online magazine contains articles related to big data and warehousing, databases, information strategy, and integration and governance.
There are multiple regular columnists, and I am now one of them. I am covering data in motion in a monthly column.
My first article was published on January 31st and is titled "Getting the big data ball rolling".
You can find it at: http://ibmdatamag.com/2014/01/getting-the-big-data-ball-rolling/
I have put together a plan for a series of articles. When it gets more in-depth, I will complement the articles with
blog entries here. I will also continue to cover other subjects, likely more technical ones, in this blog.
Hopefully this will get me to write a blog entry a bit more regularly than I've done lately.
Until next time...
I have to say, these are busy times!
With a TimeSeries PoC and multiple activities around Streams, time flies by quickly.
It's been a while since I updated the InfoSphere Streams Playbook, and an update was overdue: there are new videos, training material, and capabilities that were not reflected in it. Here's what I updated:
- The list of databases supported and the new support for MQTT
- Videos and tutorials: there is now a link that should provide the complete list of available videos dynamically. I also cleaned up the tutorials and added a brand new series of tutorials.
- Video use cases: some new YouTube videos that show interesting uses of Streams
With the end of the year so close, we can expect everyone to be preparing for the new year. It looks like 2014 will be another fun year!
The other day I ran across an article on InfoWorld.com: "Cloudera pitches Hadoop for everything. Really?"
Of course, the article starts by mentioning the expression about hammers and nails. This is an old story, and it appears that it is getting ready to repeat itself. As it's been said: "those who forget the past are doomed to repeat it".
Hadoop has been the biggest star of the big data story. I have to say that it is revolutionizing data processing and for good reasons. Many seem to point to the use of cheap clusters based on commodity hardware. I personally prefer to attribute it to the large amount of data that has different requirements from traditional data processing.
The traditional data processing needs are still there and still growing. Getting rid of data "silos" has proven extremely difficult; it would also mean writing off years of investment and rewriting many proven applications.
Instead of trying to fit everything into Hadoop, it is much better to have an overall strategy that takes into account the different needs of different data sets and makes sure the overall architecture accommodates the exchange of information among all of them.
Cloudera wants Hadoop to become the "enterprise data hub". As the article mentions, "Hadoop is still seen on all sides as a bucket of parts...". Maybe it is a bit early to talk about an enterprise data hub based on Hadoop.
Of course, if all you have is a hammer, everything looks like a nail.
There is now a new resource for Streams: https://www.ibmdw.net/streamsdev/
The Streamsdev site includes articles, blog entries, videos, and intro labs. You can also download the latest Quickstart Edition of Streams from there, either as the product itself or as a VMware image, and do the labs at your leisure.
This site is put together by developers, for developers. Still, if you are new to InfoSphere Streams, you can find something there for you too. Just go to the getting started section under "Docs".
Since the IBM Information on Demand (IOD) conference starts this weekend, you can also find information on the Streams activities (labs, presentations) during the conference. You can see the next few activities on the main page or a more complete calendar under Events.
This site is evolving. You should go look at it at least once a week to see what's new.
Hopefully many of you are going to the IOD conference next week. Enjoy the conference and learn a lot!
Last week, on October 22, IBM announced a new version of InfoSphere Streams: version 3.2.
This follows version 3.1 that was announced on May 21.
The new version includes some nice improvements such as remote development, a REST API for data access, and improved toolkits.
Over the next few blog entries, I'll go into more details on these features. In the meantime, you can find information on
InfoSphere Streams 3.2 at:
If you are interested in trying Streams, IBM provides the Quickstart Edition, which you can download as a native product or
as a VMware image. You can download it at:
Of course, you may need more information on how to use Streams. You can start by browsing through the InfoSphere Streams Playbook at:
If you have questions, don't hesitate to drop me a note or comment on my blog entries.
Until next time!
If you've been following my blog over the last few years, you may have noticed a few things lately:
I have not blogged in a few months
My blog's name has changed
The significant part is really the name change. It went from "Informix and Computing" to "Big data in motion".
Let me first address the Informix part. Yes, I am still involved with Informix activities. In fact, I am currently working on a proof-of-concept for Informix TimeSeries that involves technologies such as Java, Kafka, ZooKeeper, fastjson, MessagePack, and more. So, Informix continues to be involved in "Big Data" and its use with other current technologies.
Will I continue to talk about Informix? Probably. It all depends on whether I believe I have something interesting to say on the subject. As long as I have activities with Informix, I will have opportunities to find interesting information.
Now. What about "Big data in motion"?
A while back I decided to go back to my old team: Worldwide Technical Sales and Enablement.
My main focus is now on InfoSphere Streams. This has already been an interesting ride. I've worked on multiple projects that include putting together an extensive training session, working on PoCs, writing DeveloperWorks articles, and more. I've even put together a DeveloperWorks wiki that centralizes all sorts of resources related to InfoSphere Streams. I called it the InfoSphere Streams Playbook.
InfoSphere Streams is part of an overall "Big Data" architecture. There are many ties between Streams, the BigInsights platform, and any other technology that helps get big data under control. Yes, that includes Informix. It also includes many other technologies.
My focus may be mainly on "in-motion" data but the entire "Big Data" solution stack eventually interacts with it. That explains the new blog title.
As usual, I want to continue "casting a large net" so I can be free to talk about anything I find interesting.
So, drop me a line and post comments. Let's continue a dialog that will help everyone (including me) learn new things and continue to have fun with our technological challenges.
A few years ago, IBM started talking about a smarter planet: instrumented, interconnected, intelligent.
We are seeing more and more uses of sensors, from your smartphone and its many sensors (GPS, proximity, temperature, barometer, etc.) to the electric meter at your house. Add to that all the other sensors used in industrial plants, and even sensors on rails!
How can we convert this deluge of data into information?
This leads to issues related to two ways to handle data: in-motion and at-rest.
It happens that IBM has a mix of products that can handle these two "states" of the data:
For data in motion, we can use InfoSphere Streams for real-time analytics, often applying models derived from more in-depth analysis of historical data.
For data at rest, there are the problems of how fast we can store it and how fast we can retrieve the information, especially when many users are making requests. This would be an operational data store environment. Then, of course, there is the issue of "in-depth" analysis, which requires fast access to large amounts of data.
Informix offers a combined solution with its TimeSeries capabilities and the Informix Warehouse Accelerator.
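To give a flavor of the TimeSeries side, here is a minimal sketch of the kind of server-side aggregation involved. The table, row type, and calendar names are hypothetical, and the AggregateBy details should be checked against the Informix TimeSeries documentation.
-- Hypothetical schema: one row per meter, with a TimeSeries column "readings" of row type reading_t.
-- Average the raw sensor readings into hourly values using a predefined calendar.
SELECT meter_id,
       AggregateBy('avg($value)', 'cal1hour', readings)::TimeSeries(reading_t)
FROM meters
WHERE meter_id = 1234;
Because the aggregation happens in the server, only the reduced result travels to the client.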
Learn more about the use of Informix to solve this big data problem in the following webcast:
Solving the Big Data Challenge of Sensor Data
Date: June 26, 2013
Time: 1:00 PM EDT / 10:00 AM PDT
Register at: https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=641115&sessionid=1&key=AA3293E3AC9715CF3D602D0DEAE4D52B&sourcepage=register
The new Informix, version 12.10, was announced last week. It is time to start talking about the new features in TimeSeries.
The Informix team has added a public version of a fast loading mechanism. It allows you to load data into existing time series that are defined as part of a container.
This loader API was previously undocumented and only available as part of the tooling. A lot of work has gone into it since its original internal implementation. You should not try to use the older internal version: it disappears in 12.10 in favor of this new one.
You can find a description of its use in the "Informix Smart Meter Central" wiki, on the page "Loading fastest with the loader API".
You should also refer to the Informix documentation for more details.
Since the loader API is an SQL API, it can be used by any client, including InfoSphere Streams; the sketch below shows the general shape of a load session.
For more information on how to use Streams with the loader API, please see the Informix Smart Meter Central wiki: Streams and the TimeSeries Loader API.
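As a rough illustration only, here is a minimal SQL sketch of a load session. The TSL_* function names come from the 12.10 loader API, but the exact signatures, as well as the table, column, and data values below, are assumptions on my part; verify everything against the documentation pages mentioned above.
-- Assumed target: a table "meters" with a TimeSeries column "readings" (hypothetical names).
-- Open a loader session for the table/column pair, logging problems to a file (assumed signature).
EXECUTE FUNCTION TSL_Init('meters|readings', '/tmp/tsl.log');
-- Feed one delimited record: primary key, timestamp, then the element values (assumed format).
EXECUTE FUNCTION TSL_Put('meters|readings', '1234|2013-04-01 00:00:00|10.5');
-- Write the buffered data into the time series.
EXECUTE FUNCTION TSL_Flush('meters|readings');
-- Close the loader session and release its resources.
EXECUTE FUNCTION TSL_Shutdown('meters|readings');
The point to notice is that everything goes through EXECUTE FUNCTION calls, which is why any SQL-capable client, Streams included, can drive the loader.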
More to come. Don't forget, the IIUG conference is just around the corner. This is the perfect place to learn about all the new features in Informix 12.10: Simply powerful.
We are seeing more and more interest in using InfoSphere Streams and Informix together, in the context of "Big Data".
InfoSphere Streams is a platform that allows you to add operators as you see fit.
In our case, there are already a few operators that can be used to read from or write to Informix from InfoSphere Streams.
There is a new DeveloperWorks article that describes how this can be done. With these basic examples, you should be
able to integrate Informix into a Streams environment (or vice versa) in no time.
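The article has the real details; conceptually, though, such operators simply execute SQL against Informix on behalf of the streaming application. Here is a hypothetical illustration (the table and column names are invented) of the kind of statements a sink and a source operator might issue:
-- A hypothetical Informix table that a Streams sink operator could append to.
CREATE TABLE sensor_readings (
    sensor_id  INTEGER,
    reading_ts DATETIME YEAR TO SECOND,
    value      FLOAT
);
-- The sink turns each incoming tuple into a parameterized insert.
INSERT INTO sensor_readings (sensor_id, reading_ts, value) VALUES (?, ?, ?);
-- A source operator does the reverse: it streams out the rows of a query.
SELECT sensor_id, reading_ts, value FROM sensor_readings WHERE reading_ts > ?;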
The Informix development team has put a lot of effort over the last year or so into continuing to improve the product's capabilities.
We strongly believe that this new release will help everyone, customers and partners alike, address the challenges and changing needs of data management.
Will it be faster? Will it be easier to manage? Will it include new functionality? Will it be smarter to accommodate a smarter planet?
What about big data and analytics?
You're in for a treat! Here is the webcast information:
The New IBM Informix: It's Simply Powerful
Date: Tuesday, March 26, 2013
Time: 10:00 AM PDT
Don't miss it.
I dare add that, to me, the new IBM Informix is simply wonderful!