This was the last day of the conference, with 35 sessions. I was surprised to see how many people attended the presentations until the end. I see this as a big endorsement of the value provided by these presentations.
On my part, I delivered one presentation first thing in the morning and another one starting at 2:10 pm. Despite the timing, both sessions were well attended.
Overall a very successful conference that was well worth attending.
I remember seeing something like this title in some Informix marketing material many years ago. I think it was related to the fact that IDS has extensibility features that allow developers to adapt IDS to their business requirements as the technology and needs evolve.
The "future built-in" idea came back to me as I was reading a Computerworld article titled "The Desktop Traffic Jam" (see: http://www.computerworld.com/s/article/342870/The_Desktop_Traffic_Jam). In it, they talk about a new feature in Windows 7 (User Mode Scheduling) that lets thread multiplexing take place in the application instead of in the kernel. They go on to say: "Handling this multiplexing in the application instead of in the operating system kernel makes thread scheduling more efficient."
I know it's not quite the same, but it is similar to the idea that IDS decides its own thread scheduling, making it more efficient since it will not re-schedule a thread that is in a critical section of code. This way it avoids having threads get scheduled only to find out that they have to wait. All that makes the threading model more efficient. I wonder how difficult it would be to take advantage of thread multiplexing onto cores. Could it be as simple as having one CPU VP per core with some "core affinity", if the operating system supports that? Then IDS would already be there... with the future built into it.
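Out of curiosity, here is what that "core affinity" looks like from user space on Linux. This is nothing Informix-specific, just a minimal Python sketch of the OS call a one-VP-per-core layout would depend on:

```python
# A sketch of "core affinity" from user space: pin the current process
# to a single core. Linux-only (os.sched_setaffinity); other systems
# have equivalent calls.
import os

def pin_to_core(core: int) -> set:
    os.sched_setaffinity(0, {core})   # 0 = this process; {core} = allowed CPUs
    return os.sched_getaffinity(0)    # report the resulting CPU set

if hasattr(os, "sched_setaffinity"):
    print(pin_to_core(0))  # {0}
```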
You may not know but the Informix lab is extending a helping hand to universities around the world. One example of that was the hosting of university professors at the last Informix conference.
As part of this, I am on my way to the University of Strasbourg (France) to teach a 3-day seminar on subjects related to IDS. I had all the latitude I wanted (and more) to decide on the content. I will be delivering this seminar starting next Monday (June 8). We'll see how it is received. Watch for my blog entries after each day, network access permitting.
I've been silent for quite a while. That does not mean I have not been busy!
A lot of effort has been put into TimeSeries in 11.70.xC3 and 11.70.xC4 and we are still going full steam ahead. We continue to improve its performance, scalability, usability, and functionality.
I wanted to put together a repository of information so people can find it all (or most of it) in one place. For this purpose, I put together a wiki on developerWorks that is dedicated to smart meter support. It is still a work in progress but I believe it is a good start. You can find it using the tinyurl: tinyurl.com/InformixSmartMeterCentral
Let me know what you think.
I was doing a search for some information on the web when I hit the following page:
This is the Informix Solution Portal. It includes links to:
- Informix business partner directory
- Informix solution directory
- Informix consultant services
- Informix OEM distributors
It also includes information on the IDS business partner program and a link with information on how to join the solution portal.
Take a look. If you are a partner and you're not in there, you may want to take steps to get listed. If you're a customer, you may find something to make your life easier.
Here's something you may want to act on:
IBM Informix Survey for Continuous Availability White Paper
Complete the Survey to win an Apple iPad! - 1 week left
Every response matters. - Start Survey here!
IBM Informix is the database software voted #1 in customer satisfaction. Clients choose Informix because it is reliable, low cost, and hassle free. Solution providers choose Informix for its best-of-breed embeddability.
Yet, there are some people who still don't 'get' Informix or realize the many benefits of deploying it. Help us gather data to support this claim.
- Informix is exceptionally hardware efficient, which means that (in the REAL world) you need to spend MUCH less on hardware to get the same performance as other products.
- Informix is exceptionally reliable, which means that (in the REAL world) you don't need to pay lots of people to make sure it stays 'up'.
- Informix is exceptionally scalable, which means that (in the REAL world) it can be idling one moment and then processing thousands of transactions the next with no apparent stress.
Advanced DataTools is working with Oninit to gather data about what happens in the REAL world and support these assertions with empirical evidence.
The data collected will be used to compile a report that will be made available to every CTO, IT Director and IT Manager. Along with this, they will receive a list of all the major application vendors that are now porting their applications to Informix V11.5 and a document outlining the key reasons to choose Informix.
Every response matters. - Start Survey here!
Win an Apple iPad. One randomly selected participant in the data collection phase of this research will win an Apple iPad, provided by Advanced DataTools Corporation, an Advanced IBM Informix Business Partner.
Note: Public Sector Employees are not eligible
In case you missed the announcement this week, IBM announced the availability of the Informix warehouse as of March 5, 2009. Here is a quote from the announcement this morning:
From Kevin Brown, lead architect for IBM and Jim Kobielus from Forrester Research:
"This can save weeks of effort into just a few hours," Brown said. "In addition, customers often did without information because of the cost of effort to get the information. The lost opportunity cost savings is harder to quantify, but can be significant once they use their warehouse platform for smarter decision-making."
Take a look at the press release at: http://www-03.ibm.com/press/us/en/pressrelease/26840.wss
See also the Informix Warehouse page on the ibm site:
Last week, on October 22, IBM announced a new version of InfoSphere Streams: version 3.2.
This follows version 3.1 that was announced on May 21.
The new version includes some nice improvements such as remote development, Rest API for data access, and improved toolkits.
Over the next few blog entries, I'll go into more details on these features. In the meantime, you can find information on
InfoSphere Streams 3.2 at:
If you are interested in trying Streams, IBM provides the Quick Start Edition that you can download as a native product or
as a VMware image. You can download it at:
Of course, you may need more information on how to use Streams. You can start by browsing through the InfoSphere Streams Playbook at:
If you have questions, don't hesitate to drop me a note or comment on my blog entries.
Until next time!
We are seeing more and more interest in using both InfoSphere Streams and Informix together.
This is in the context of "Big Data".
InfoSphere Streams is a platform that allows you to add operators as you see fit.
In our case, there are already a few operators that can be used to read from or write to Informix from InfoSphere Streams.
There is a new developerWorks article that describes how this can be done. With these basic examples you should be
able to integrate Informix in a Streams environment (or vice versa) in no time.
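As a rough sketch of what such a database sink operator does (this is not the actual Streams database toolkit API), the loop below takes tuples off a stream and batches them into a table; sqlite3 stands in for Informix purely so the example is self-contained:

```python
# Minimal sketch of a database "sink" operator: take tuples off a
# stream and insert them into a table in small batches. sqlite3 is a
# stand-in for Informix here, just to keep the example self-contained.
import sqlite3

def sink(conn, stream, batch=2):
    cur, pending = conn.cursor(), []
    for tup in stream:
        pending.append(tup)
        if len(pending) >= batch:        # commit in small batches
            cur.executemany("INSERT INTO readings VALUES (?, ?)", pending)
            conn.commit()
            pending.clear()
    if pending:                          # flush whatever is left at the end
        cur.executemany("INSERT INTO readings VALUES (?, ?)", pending)
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
sink(conn, [("s1", 1.5), ("s2", 2.5), ("s1", 3.0)])
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 3
```

A source operator would be the mirror image: a SELECT feeding rows onto the stream.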
InfoSphere Streams is starting to engage the open-source community to provide additional capabilities to its real-time analytics platform.
This is still very early in the process and we can assume we'll see it evolve quickly. That may also be a way to consolidate
the offering of the most popular open-source toolkits currently available on the Streams Exchange.
One of the projects is under the name resourceManagers.
The resource manager currently available to support Streams is YARN!
Learn more about what is available for Streams on GitHub by looking at the newest page from the InfoSphere Streams playbook:
Streams on GitHub.
Ok, this is probably not news to you but there is information you should know.
The Insight conference, formerly known as Information on Demand (IOD), is going on Oct 26-30.
This is only 35 days from now! There is a lot of good content. For me, it starts on Sunday with an IoT deep dive call/meeting.
From there, I'll go to the demo ped to spend my evening. Please come visit!
For the week, I am particularly interested in the Streams sessions such as:
Just to name a few. I am involved in a few sessions:
LCI-4252A: Hands-on lab "Streams and text analytics" on Tuesday afternoon (2:00pm)
LCI-5454A: Hands-on lab "The Internet of Things and Geospatial Analytics Powered by InfoSphere Streams", on Thursday morning (10:00am)
IIS-7096A : Expert Exchange "How to Harness the Internet of Things"
The other exciting part for me is that I am coming out with a new book:
"The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
I am doing a book signing on Tuesday between 9:30 and 10:30.
The Insight conference provides many excellent learning opportunities on many subjects including cloud, mobile/social, security, analytics, and more.
It is also a great opportunity to network with experts from IBM, partners, and other customers.
I'm looking forward to seeing many of you at the Mandalay Bay in Las Vegas.
For more information on the conference, please go to the following web site:
We are barely more than two weeks away from the Insight conference.
As I mentioned in my previous blog, lots of interesting sessions on Streams. Still there is more.
As you know, Streams is excellent at providing real-time analytics. It can be used with other
products to provide a solution in many domains. One of them is the Internet of Things (IoT).
It happens that I'll be participating in an IoT deep dive on Sunday October 26.
I'll be joining the main speakers:
Michael Curry, Vice President, WebSphere Product Management, IBM.
Jerry Keesee, Director, Real-Time Context Computing, IBM.
Jeff Jonas, IBM fellow and chief scientist, context computing
The technical section is divided in three parts:
Kevin Brown talking about sensors and gateways
Peter Crocket telling us about the IBM IoT Foundation
Jacques Roy covering data-in-motion with Streams
You can register for the event at: http://insight-deep-dive.eventbrite.com
Don't forget to come see me at Insight in my sessions and labs as well as a book signing
session on Tuesday October 28 at the Insight Conference book store between 9:30 and 10:30.
The book is: "The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
See you in Vegas!
After walking by 3 different Starbucks, I arrived at the conference breakfast hall.
I thought I would have a quiet breakfast by myself when I saw Bruce Brown, a big data partner expert.
Soon after I sat down, others joined us: they were long-time InfoSphere Streams experts. That was a great opportunity to talk shop and exchange information.
Then it was time to attend the general session that started at 8:15.
The session started with Jake Porway and Jeff Jonas talking about context computing.
The session was so packed with information that it is impossible to summarize properly.
Let's just say that Bob Picciano talked about three imperatives:
Data is the new natural resource, the basis for business advantage
Systems of engagement
Multiple speakers expanded on these themes.
I particularly liked the line: "Geospatial data will become analytics superfood".
There were many interesting sessions to choose from but, because of multiple engagements, I only attended
the Joy Global session, where they described the real-time analytics they perform while monitoring mining equipment.
There was so much going on. If you are not at the conference, you may want to look for InsightGo to attend some general sessions remotely.
Now it's time to move on to Tuesday!
We're up and going.
The conference is still being set up but there are events happening this Saturday.
This morning I was participating in the "Big Data and Analytics EdCon". This is part of an education session for faculty
offered under the IBM Academic Initiative. This was a hands-on session introducing InfoSphere Streams and it was full!
All sorts of other sessions are taking place in other areas of the Mandalay Bay convention center.
Tomorrow, I'll be part of the "Internet of Things Deep Dive" as I mentioned in my previous blog entry.
The deep dive goes from 11:00am until 5:30pm. There is still time to register for it:
If you are already in Las Vegas for the Insight conference, this would be a good use of your time.
Finally, Sunday evening, the Insight conference officially starts with the Solution EXPO Grand Opening Reception
starting at 6:00pm.
I'll post comments on the conference daily so, stay tuned!
The event went as planned at the Mandalay Bay convention center with presentations on:
Internet of things
Informix gateways and Informix capabilities for the internet of things
IBM Internet of Things foundation
Real-time analytics with Streams in the context of an internet of things architecture
Many people attended and were engaged in the presentations. Overall a success.
The Insight conference officially started with the opening reception.
We are getting ready for a great week of learning and networking.
Another full day.
It started at 7:00 with a breakfast meeting and was followed by a conference call.
I then went to the conference bookstore for a book signing activity and moved on to a customer lunch.
As I mentioned in other blog entries, my new book is now out, at least at the conference:
"The Power of Now: Real-Time Analytics and IBM InfoSphere Streams"
My afternoon was taken by a Streams and text analytics lab.
I went back to the conference floor and had interesting conversations with many technical people
from different world regions. The conference sure provides great opportunities.
I'll be able to catch up on some Streams sessions tomorrow. I can't wait to hear some customer/partner stories.
Also, I heard through the grapevine that there may be a big announcement at the general session.
I'll make sure not to miss that either.
The general session started with an example of context computing and an interview with Captain Phillips.
All that was pretty exciting but what stole the show was the announcement of the partnership
between IBM and Twitter for analytics.
Then I went on my way to attend Streams sessions talking about use cases.
The first one I attended was about a partner, Voci, that has an appliance that converts audio to text.
In addition, it adds metadata such as the type of voice, accent, and sentiment.
This solution can be augmented with InfoSphere Streams and BigInsights to take action in real time.
The next session was a panel of experts on geospatial analytics.
In the afternoon, I attended a session on the features of the new Streams beta that was announced last Friday.
You can find more information at http://ibm.co/streamsdev.
I followed with a session on context computing used to counter fraud. I finished my day
with a panel of users.
The conference is winding down with the last day tomorrow.
I was talking earlier about encapsulation and the collection of objects that can be found in another object. Let's look at another possibility:
A corporation has multiple regions, a region has multiple branches, a branch has multiple customers. To summarize:
Let's say that the customers are loans taken by different types of companies. To find out the average amount of the loans given out by each branch, the strict approach would be that each branch has a method (function) that does the following:

customer_count = 0
total_loans = 0
for each customer
    customer_count = customer_count + 1
    total_loans = total_loans + customer.getLoanAmount()
end // for each customer
return(total_loans / customer_count)
We protect the encapsulation of customers by providing a method that returns the loan amount (getLoanAmount). The first problem we have relates to performance: All the customer objects for a branch need to be instantiated (created). That may require quite a bit of memory. The second performance problem is that each customer object instantiation requires one database call.
What if we want to do this average at the region level instead of the branch level? Then, to preserve the encapsulation, we need to create additional methods to return totals and counts. I'll let you imagine the processing needed. On the performance side, we see that the number of objects instantiated and the number of database calls increase with the number of branches and customer objects processed.
If you can convince the architects and programmers to relax their encapsulation requirements, you could add one method at the branch level, one at the region level, and even possibly one at the corporation level to return the desired average. Considering the average for a region, the method would implement one SQL statement looking like:

SELECT AVG(loan) FROM customers
WHERE region_id = :region_num
GROUP BY region_id;
In this case, I don't instantiate all the customer (and branch) objects, saving processing and memory. It is pretty obvious that the performance of these requests will be greatly improved compared to the "strict" OO approach.
Having a method that uses the database to do the processing is one thing. What about more complex processing like the average risk taken by a branch on their loans?
IDS provides the ability to implement user-defined aggregates. It would be easy to implement the average risk function. The number of lines of code would be less than implementing it in the application, and the performance would be better, if only because of the significant reduction in the volume of data transferred.
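IDS user-defined aggregates are built from INIT, ITER, COMBINE, and FINAL support functions. As a rough sketch of that structure (in Python, with made-up risk values; this is the shape of the logic, not the IDS API itself), an average-risk aggregate would look like:

```python
# Sketch of the init/iterate/combine/final structure behind a
# user-defined AVG-style aggregate. Names and data are illustrative.

def init():
    return (0.0, 0)  # (running total, row count)

def iterate(state, risk):
    total, n = state
    return (total + risk, n + 1)

def combine(s1, s2):
    # merges partial states, which is what enables parallel execution
    return (s1[0] + s2[0], s1[1] + s2[1])

def final(state):
    total, n = state
    return total / n if n else None

# Simulate two parallel partitions of loan-risk values.
part1 = [0.2, 0.4, 0.6]
part2 = [0.1, 0.3]

s1 = init()
for r in part1:
    s1 = iterate(s1, r)
s2 = init()
for r in part2:
    s2 = iterate(s2, r)

print(final(combine(s1, s2)))  # ≈ 0.32
```

Because the aggregate runs inside the server, only the final number crosses the wire, instead of every customer row.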
I hope that in the last few blog entries I gave you some things to think about to improve the overall performance of your systems. The bottom line is: get involved in the analysis and design phases of new projects. You can add a lot of value there.
Another year, another conference. It has been so busy that I have not had the time to write a short blog entry for each day. Here is my quick update.
It all started with the business partner council and the customer advisory council on Saturday morning. I attended the customer advisory council and found it interesting and full of good discussions.
The conference was kicked off with an opening reception on Sunday night and we were off to the races. There were eight Informix sessions on Monday, including presentations on how IBM helps Cisco, open source, a hands-on lab on high availability, another one on the new features of Informix 11.70, best practices for virtual environments, and performance enhancements. Of course, the most popular session was from Jerry Keesee, titled: "Informix at IBM: The next decade".
The day ended with an Informix reception at the Mandalay Bay beach casino for an Informix 11.70 launch celebration and to start looking forward to the next decade of Informix at IBM.
Tuesday started early with an Informix "eat and meet" breakfast at 7:00am, followed by nine Informix sessions throughout the day. The sessions covered areas such as upgrade, new features, Informix warehouse, application development, 4GL, embeddability, flexible grid, and more. It was also interesting to hear about how Informix is used to run a steel plant.
The day ended with a beach party reception. Now it is on to Wednesday with another full agenda.
It seemed so far away and now it is almost here: IOD 2008 in Las Vegas October 26 to October 31. There are even some meetings on October 25th.
I'll be there of course. I am delivering two presentations:
Java Best Practice with IDS
Thursday Oct 30, 8:30am - 9:30am, Mandalay Bay - Tradewinds C
IDS and the Retail Integration Framework, including WebSphere Remote Server
Thursday Oct 30, 10:00am - 11:00am, Mandalay Bay - Coral A
I also participate in the "Meet the experts" sessions. These sessions occur almost constantly throughout the conference.
There are a lot of good sessions happening that week. There is also the benefit of meeting so many knowledgeable people. That will be a tough week but it is worth it!
We had many good presentations on Monday and of course several impromptu meetings all over the place. Time is running short so I have to keep this entry to a minimum.
In one session I heard about Choice Hotels, which has 6,000 properties in 10 different brands. They strongly depend on IDS to run their business. In another one, I heard about Peapod, an online grocer that also relies on Informix IDS to run their business. Finally, we also heard about the new IBM System x bundles for Informix, where they provide tested configurations in "T-shirt" sizes (small, medium, large, X-large) to fit any business. They mentioned that such a configuration showed much better price performance than a Sun system running Solaris.
More to come on Tuesday.
I arrived in Vegas Sunday mid-afternoon. Already, the activities have been going on for a day and a half. The expo floor looks good, with Informix demos at multiple locations including the blade server with Informix and the theater presentation showing, among other things, the clustering capabilities that include SDS, HDR, RSS, and ER.
The evening reception was in two parts: one in the expo and a second one for a specific section of the Information Management portfolio.
This year I decided to stay at the Luxor, next to the Mandalay Bay. You can walk from one hotel to the other without going outside. To go from my room to the registration desk takes a little over 15 minutes. On my way, I passed 3 Starbucks. I guess a lot of attendees need that to get through the long hours we'll have this week.
I did not notice a session on Wednesday. Luckily, I went to it Thursday morning. It was: "Tuning Informix in a Sandbox Environment" by Russell Glancy from GSN Digital.
Russell covered in detail how a product from exactsolutions, iReplay, allows him to test new configurations, versions, and tuning in a safe environment using the same workload as his production machine. This way, he knows exactly what will happen when he makes the changes to the production environment.
I also co-presented the session "Keeping costs low and maximizing flexibility for Jamaica using IDS" with Walt Brown, senior manager at FSL Jamaica. My role was mainly to introduce Walt and let him present his environment. Walt went into detail about their environment and how they basically run all the Jamaican government systems, including tax collection, which was even active and used during a hurricane.
There were several other sessions including:
A deep dive into the IBM Informix 4GL Service Oriented Architecture Feature, Gagan Maheshwari, IBM
Dimensional modeling for IBM Informix warehouse users, Fred Ho, IBM, Sandra Tucker, IBM
Managing IDS configuration and performance with Server Studio and Sentinel, Keshava Murthy, IBM, Anatole Vichon, AGS Ltd
And several more... All that on the last day of the conference!
The conference is over. It is now time to go back to work.
In Arvind Krishna's featured keynote titled "Reduce Your Data Management Costs with Workload-Optimized Systems", we heard about Cisco Systems. They mentioned that they chose Informix a few years ago after looking at all possibilities for embedded databases, including open-source ones.
I spent some time with Walt Brown (from FSL) and Cathy Elliott to fine-tune his presentation. More on that Thursday.
There were several interesting sessions today:
- SOA Enablement on IBM Informix 4GL, Gagan Maheshwari, IBM
- Building Data Warehouses with Informix, Lester Knutsen, Advanced Data Tools
- Hands on lab on end-to-end security with Informix, Ted Wasserman, IBM
- Open Admin Tool for IDS, John Miller III, IBM
- All About IDS CAF, Connection Manager, and Failover, Ron Privett, IBM
- Using Informix in Telecommunications, Kevin Brown, IBM
- Secure and available public finances with IDS continuous availability, Cesar Jiminez, Jalisco Mexico Government
And, of course, demos, discussions and food on the expo floor and in the networking event in the evening.
Once again, another full day. There were Informix sessions on embeddability, virtualization/cloud computing, security, and zero-downtime upgrade. We also heard a great presentation on database tuning from Rick Rabe and Tom Girsch from Hilton Hotels.
Great sessions altogether. Now on to Thursday.
The Informix team is putting a lot of energy behind this conference. The team is also putting together a Customer Advisory Council meeting on June 2nd where there will be discussions on product directions and features prioritization.
For more information on the conference, please see:
The call for speakers is going on until February 13. This is a great opportunity to participate with the EMEA Informix community and get some exposure for yourself and your company. Take advantage of it.
Find out more at the URL mentioned above. Like it says on that site: Register Today!
Wednesday started with an Informix "eat and meet" breakfast followed by nine different Informix sessions spread throughout the day. My favorite session was: "How Hildebrand and IBM bring smart metering to homes across Britain". It was very interesting to see a real-time system where people can see their power consumption and compare it to a pool of similar houses to see how they are doing. The system not only measures the total consumption at a home but can break it down to specific outlets. For example, some people were able to find out that their energy consumption was greatly impacted by their use of hair straightening devices. Another person found out that they spent around 250 pounds per year to run their old refrigerator. Buying a new one for 200 pounds made it pay for itself pretty quickly.
Of course, the other presentations were also interesting. They covered areas such as building data warehouses, grid-based replication, Informix in the cloud, and more.
An additional 11 sessions were held on Thursday to wrap up the conference.
The one thing that is hard to measure at a conference like this is the value of the interactions with other people: discussions on different interests, new challenges, and how Informix has been used. This ties into what I mentioned in this blog on Oct 9. Good ideas come from interactions between people. The conference provided a good environment for that. This was a great conference and you can expect interesting things coming out of the Informix lab in the future. I'm sure we'll have a lot to say next time we meet: the International Informix Users Group (IIUG) conference in Overland Park, Kansas, that will be held between May 15 and 18, 2011.
Another day at IOD. A lot is happening.
For one thing, the energy level from the Informix IBM people, customers, and partners is palpable! Everybody is excited about the product and how it is doing.
As far as sessions, I had a problem: I could not attend two sessions at once. Here's a quick summary of one I attended:
HILT is a trousers manufacturer (fashion industry) that focuses on quality. They believe they make the ultimate trousers. To manage their business, they use a specialized software package produced by AVM software (see: www.avmsoftbase.de) that runs on Informix. One thing the software allows them to do is just-in-time provisioning. This means that in a matter of a few days, they can replenish a store anywhere from HILT's factory. HILT likes the fact that IDS does not require any attention, so they can spend their resources running their business instead of managing a database.
We also had an excellent party for Informix Monday night that was well attended (courtesy of Intel and HP). Lots of good conversations. Working hard...
till next time!
Another day at IOD.
I did not mention Jerry Keesee's roadmap presentation from yesterday. Praising his presentation seems too self serving :-)
Today, I attended a presentation from a company called Finish Line. They manage 700 stores. Their IDS installation includes 5 servers, 8 instances, and 28 databases with a total of 2TB of data. They use 4GL for their backend processing and use SOA to provide a single view of the business environment. One of the benefits of using SOA is that they can keep track of all inventory in all stores and, if needed, ship merchandise from one store to a customer. This effectively gives them the capabilities of 700 distribution centers.
They keep looking for ways to make their environment better. IDS is the cornerstone of this strategy.
The presentation started with the top 10 things that their one DBA does:
10. Yawn
9. Surf the web
Another party in the evening... nights are short...
Cows can detect odors up to 5 miles away.
You can learn all sorts of interesting facts at IOD...
I went to a presentation titled: "Taking replication to the terabyte level at Fonterra". They are a dairy company that exports milk to more than 140 countries and territories worldwide. They recently completed a project with the help of Oninit, an IBM Informix partner.
The bottom line is that they have 20 IDS leaf nodes that feed 1.2GB of data daily to a central IDS node. Since they are planning to keep 7 years of data, that adds up really fast. They currently have only 5 years of data...
Other notable presentations: Frederick Ho's musings on data warehousing and IDS, and IDS helping the American Forests non-profit organization in reforestation (our partner Advanced Data Tools has been working with them for years).
Thursday was a busy day for me. I was giving two presentations. One on Java best practice with IDS and another one about the Retail Integration Framework with WebSphere Remote Server. I also had one hour scheduled for "meet the expert".
I also attended a very interesting presentation given by Andreas Weininger from IBM Germany. He told us about a project that implemented the MACH 11 features in a banking environment. As you can imagine, failure is not an option in that context. The implementation included a lot of redundancy, with a primary, an HDR secondary, and six shared disk secondaries. The SDS nodes were used to feed a number of Linux servers used for number crunching. The implementation was a success. It went from concept to production in 3 months!
The conference is winding down. It will be time to go home soon. It's been great but I'm ready.
IOD 2008 is now over. As I mentioned in my blog during the conference, there was a lot of interesting content. And that was just in the Informix track. Many other sessions in other tracks are relevant to Informix since we can use Informix with the other Information Management products. Now, if I only had the time to go over all the sessions to see what's interesting...
Overall, this was a great conference to attend for the sessions and for the interactions with customers, partners, and IBMers.
The next MUST ATTEND event for me is the 2009 IIUG Informix Conference, April 26-29 in Overland Park, Kansas, USA. By then I am sure we'll have a lot of interesting news to discuss!
It is finally here: The Information on Demand conference.
It started Saturday with the Informix Customer Advisory Council (CAC) meeting. The CAC is a set of customers that get together with the Informix development team to discuss different aspects of the products and their directions. The meeting was full of interesting information, most presentations included the word "confidential" at the bottom of each slide... Makes sense for presentations that talk about roadmaps and future features.
There was one (non-confidential) thing that I did not know about that I think everyone should know about: The improvements on the Information center.
If you go to the information center at: http://publib.boulder.ibm.com/infocenter/idshelp/v115/index.jsp you will find two new things in the welcome screen:
- Subscribe to the information center updates (RSS feed)
- Download the new search plugin
The RSS feed gives you a way to know when new information appears in the information center. That is useful since you don't have to go through the entire site to find out. You never know when some new information could give you a solution to your business problems!
The other one is very interesting. You can install a small plugin in your browser so you can search the information center from there without having to go to it. Guess what I did after the meeting... The installation is so quick that I thought nothing happened. Now, from the search box in the upper right corner of my browser, I can start a search that gives me the results from a search of the information center. By the way, the plugin is available for IDS 11.50, 11.10, and 10.0.
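To make the RSS idea above concrete, here is a small hedged sketch in Python. The feed content and item titles below are made up for illustration; a real reader would fetch the information center's actual feed URL over HTTP. The point is simply how a script could compare the feed against titles it has already seen, so you learn about new documentation without crawling the whole site:

```python
import xml.etree.ElementTree as ET

# Sample feed content (made-up items; a real reader would fetch the
# information center's feed URL over HTTP and pass the body here).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Information center updates</title>
  <item><title>New ONCONFIG parameter documented</title>
        <pubDate>Mon, 01 Dec 2008 00:00:00 GMT</pubDate></item>
  <item><title>Backup and restore guide updated</title>
        <pubDate>Tue, 02 Dec 2008 00:00:00 GMT</pubDate></item>
</channel></rss>"""

def new_titles(feed_xml, seen):
    """Return item titles from the feed that are not in the 'seen' set."""
    root = ET.fromstring(feed_xml)
    titles = [item.findtext("title") for item in root.iter("item")]
    return [t for t in titles if t not in seen]

seen = {"New ONCONFIG parameter documented"}
print(new_titles(SAMPLE_FEED, seen))  # ['Backup and restore guide updated']
```

A real script would remember the `seen` set between runs (a small file would do) and add each run's new titles to it.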
Someone asked me the following question:
"How do I keep passwords in the database so nobody can get them?"
It means that we cannot keep the passwords in plain text in the database. Informix has a few functions that can be used for encryption: ENCRYPT_AES and ENCRYPT_TDES. It would be easy to create a table and encrypt the column that contains the passwords.
The next statement that came up was: "..but, if someone has the encryption password, he can get all the passwords. We need to protect the passwords from internal access".
This means that we need to use a different password to protect each password in the table. The solution I proposed was to use the password to encrypt itself. Let's look at an example:
CREATE TABLE passwd (
    col1 INT,
    col2 LVARCHAR
);
INSERT INTO passwd VALUES(1, ENCRYPT_AES("Jacques", "Jacques"));
INSERT INTO passwd VALUES(2, ENCRYPT_AES("Lance", "Lance0"));
INSERT INTO passwd VALUES(3, ENCRYPT_AES("Daniel", "Daniel"));
INSERT INTO passwd VALUES(4, ENCRYPT_AES("Umut", "Umut01"));
You can look at the inserted (encrypted) values with:
SELECT * FROM passwd;
I can now test if someone has the right password for user 1 by using the password value to decrypt itself:
SELECT col1, DECRYPT_CHAR(col2, "Jacques") FROM passwd WHERE col1 = 1;
If I use the improper password, I receive an error:
SELECT col1, DECRYPT_CHAR(col2, "Jacques") FROM passwd WHERE col1 = 3;
26008: The internal decryption function failed
One more thing. Note that the encryption password must be at least six characters long. This is why, in the example, I padded some encryption passwords. An easy way to work around it would be to always add padding to make sure we meet that minimum size. Keep in mind that the maximum size of an encryption key is 128 bytes.
With this approach, we can keep passwords in the database and keep them secret.
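The same verify-by-decrypt pattern can be sketched outside the database. This is a hedged illustration, not Informix code: it uses a toy XOR stream cipher derived from SHA-256 purely to keep the example self-contained (a real application would use AES, as ENCRYPT_AES does), but the verification logic is the same idea: store the password encrypted with itself, then decrypt the stored value with the candidate password and see if you get that candidate back.

```python
import hashlib

def _keystream(key: str, length: int) -> bytes:
    # Derive a repeatable byte stream from the key (toy construction only).
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return stream[:length]

def encrypt(plaintext: str, key: str) -> bytes:
    data = plaintext.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def decrypt(ciphertext: bytes, key: str) -> str:
    data = bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))
    return data.decode(errors="replace")

# Store the password encrypted with itself, like ENCRYPT_AES("Jacques", "Jacques").
stored = encrypt("Jacques", "Jacques")

def check_password(stored: bytes, candidate: str) -> bool:
    # Right key: decrypting returns the candidate itself. Wrong key: garbage.
    return decrypt(stored, candidate) == candidate

print(check_password(stored, "Jacques"))  # True
print(check_password(stored, "Daniel1"))  # False
```

One difference worth noting: with a wrong key, Informix raises error 26008 rather than returning garbage, because DECRYPT_CHAR can detect that the decryption failed; the toy cipher here has to compare the result instead.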
This week we have the International Informix Users' Group annual conference. It is being held at the Overland Park Marriott.
Here's a bit of trivia for you: In 2008, Overland Park was listed as the 9th best place to live in the US. No doubt having the Informix lab close helped its ranking :-). You can check it at CNNMoney.com.
Sunday was a day of tutorials with eight tracks running at once. I arrived in the early afternoon. It is amazing how quickly you can get into interesting conversations, discussing different projects and business solutions.
The evening reception was a success with lots of networking and good food. This is a great start to what should be a very interesting and useful conference. Monday starts with a keynote from Dr. Arvind Krishna and continues with five tracks on a variety of subjects.
I recently ran into mentions of INT8 and, by association, SERIAL8 by Informix engineers and in a recent redbook. I want to make a quick comment on that.
These two types were added a long time ago to support eight-byte integers (64 bits). They are defined as a 10-byte structure that includes two "standard" integers. It was done this way so eight-byte integers could be supported on 32-bit operating systems. Now it appears that most operating systems support a native 64-bit integer. For this reason, new data types were added to Informix version 11.50 (fixpack 1). The new types, BIGINT and BIGSERIAL, take less space and perform better. Here is what the release notice says:
Improved Query Performance for Large Integers and Serial Data
The BIGINT and BIGSERIAL data types, which are provided as alternatives to the INT8 and SERIAL8 data types, can provide better performance than the INT8 and SERIAL8 data types.
So, let's forget about INT8 and SERIAL8 and let's use BIGINT and BIGSERIAL available in Informix 11.50.
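To illustrate why a native 64-bit type helps, here's a small Python sketch. This is an illustration of native 64-bit integers in general, not of Informix internals: a native 64-bit signed integer packs into exactly 8 bytes, while the older INT8 representation described above carries a 10-byte structure, so BIGINT saves space per value even before counting the cheaper arithmetic:

```python
import struct

# A native 64-bit signed integer occupies exactly 8 bytes.
value = 2**62 + 12345
packed = struct.pack("<q", value)
print(len(packed))  # 8

# It round-trips without loss, and arithmetic on it is a single CPU
# operation on 64-bit hardware -- no 10-byte two-integer structure needed.
(unpacked,) = struct.unpack("<q", packed)
assert unpacked == value

# Largest 64-bit signed value, shown little-endian:
print(struct.pack("<q", 2**63 - 1).hex())  # ffffffffffffff7f
```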
One of the first applications I ever used (after lunar landing) was called Eliza. It was at a time when terminals wrote on paper and a 110 baud transmission rate was state of the art. The program would start with the following statement:
I am Eliza. Tell me about your problem.
If you proceeded with a coherent conversation, you'd think you were talking to a real person.
I read, much later, that some people, after starting the dialog with Eliza, asked for privacy to continue their dialog.
Eliza was designed in 1966 (I did a Google search on it). The IBM S/360 came out two years before. All that to say that there are a lot of amazing things that people have done with computers since the first computer was invented. Many people have a tendency to discard what has been done in the past in favor of the latest technology.
From time to time, I'll suggest some reading on technology in general. The first one I want to suggest is about programming. It first came out in 1986. The second edition came out in 2000:
Programming Pearls, Second Edition
Jon Bentley, ISBN 0-201-65788-0
Hope you’ll enjoy it.
Lately I've had a lot of internal discussions about features, benefits, and qualities of Informix Dynamic Server version 11. Two characteristics that came up were the fact that IDS can be invisible and that Informix is everywhere. Hmm... everywhere and invisible... we could make a lot of jokes about that... but I don't want to be in the doG house :-)
We just closed the second quarter of 2009. A lot happened during that period: we had the Informix conference, the release of IDS 11.50.xC4, Informix Warehouse, and Storage Optimization with deep compression. Of course that does not even include IDS 11 training sessions given around the world, proof of concepts, customer discussions, many upgrades, and multiple production implementations of the latest features.
So, what do we have in store for the second half? I can't really tell you :-(.
There are two things that are pretty obvious that I can mention: IDS 11.50.xC5 will likely come out in the second half and the end of support for IDS 7.31 is September 15. That should not come as a surprise since IDS 7.31 has not been sold since September 2008. It had quite a long life (IDS 7.31. was released in 1999, last millennium!).
I'm hoping that all 7.31 customers are already working on the upgrade (you can upgrade directly from 7.31 to 11.50). Since IDS 11.50 is a superset of 7.31, that should keep the inconvenience to a minimum. Then they will be able to take advantage of all the performance improvements and all the new features that make IDS even easier to use and manage. I really believe that people who are happy with IDS 7.31 will be impressed with all the improvements in 11.50.
People on IDS 10.0 should start thinking about moving to 11.50.
If you have any questions about going from 7.31, 9.4, or 10.0 to IDS 11, don't hesitate to contact your local IBM Informix expert.
Wikis are part of the technologies included in Web 2.0. They can be very useful for sharing information within an organization. Here at IBM wikis are becoming very popular.
MediaWiki is the PHP-based wiki engine that is used by Wikipedia. In fact, it was originally written for Wikipedia. It is available as open source from http://www.mediawiki.org. The current release is version 1.12.0.
It seems to me that IDS is particularly well suited to support a wiki. It is a high-performance database engine that is scalable and requires little administration. Also, with the clustering capabilities of IDS 11, it provides a very scalable platform for a wiki.
Why mention all this? Because I created modules to support IDS as a database engine under MediaWiki. It requires IDS 11 since it also uses the Basic Text Search (BTS) DataBlade to provide search capabilities. I still consider this "beta" code and it will require additional modifications, but it works.
Any interest? Contact me at firstname.lastname@example.org
July 1st was the 10-year anniversary of the IBM acquisition of Informix. Since the acquisition, Informix has released versions 9.3, 9.4, 10.0, 11.10, 11.50, and 11.70. A few days ago, we released 11.70.xC3. Other recent additions include the Informix Warehouse Accelerator, which introduces game-changing technology for the data warehouse/data mart area. Add to that Informix-Genero for fast application development and mobile applications.
So much has happened in these 10 years. Go take a look at Informix. Download one of the free editions and give it a try. For people who think they know Informix: go take a look at the large number of improvements we've added to it over the years. Go visit www.ibm.com/software/data/informix and find out more about what is going on. The IBM Information on Demand conference is coming up. This is the best way to learn about the latest capabilities and network with Informix partners, customers, and IBMers. The conference is held in Las Vegas, October 23-27.
Now that my major deliverables are done in xC3, I'll be back regularly to talk to you about these major improvements and how many people can take advantage of them.
Ten years with IBM and going strong. There is so much more to come. Stay tuned!
There were 2 keynote addresses, 25 sessions, 5 birds-of-a-feather sessions, usability labs, and demos in the exhibit hall. Then the day closed with a Hawaiian luau sponsored by Gillani. A packed day to say the least.
All this was topped with the announcement of IDS 11.50.xC4 that includes a new storage optimization feature and the Informix warehouse bundle. Things are moving fast with Informix!
The storage optimization feature includes compressing the data, repacking to consolidate the space that is saved, and shrinking the dbspace to free that space. This new feature could save 30% to 50%, or even more in some cases.
The Informix warehouse bundle includes a tool that allows you to define your warehouse and define the extract, load, and transform process. There is a chat with the lab scheduled for Wednesday, April 29th at 8:30 AM Pacific, 10:30 AM Central, 11:30 AM Eastern, 4:30 PM London, 5:30 PM Paris that will provide more information on the subject.
When I was in France, I met two partners/resellers: VMark and Frame. Both partners are strong Informix partners and supporters. It is always good to meet partners of this caliber.
In addition to them, I must give a particular mention to ConsultiX's Khaled Bentebal, who went the extra mile and re-started France's Informix users' group with a meeting on September 30th. The meeting was well attended, with over 20 people, despite some scheduling issues that greatly reduced the advertising for it. There was a mix of roadmap, positioning, and technical presentations that were enthusiastically received by the audience.
This year we have seen several countries starting Informix users groups. The one in France is the latest, showing that Informix is growing and doing well. I wish Khaled and the France Informix users' group all the best.
I mentioned the Informix warehouse in my previous entry. There is the chat with the lab coming up. Here's something more: a new tutorial on DeveloperWorks:
Get started with Informix Warehouse Feature, Part 1: Model your data warehouse using Design Studio
Then there are the Informix Warehouse product pages:
I saw some interesting comments related to my blog entries. Hope you are reading them... The main subject is object-relational mapping (ORM). I'll get back to that soon. For now, I want to continue what I was talking about.
Some people are really passionate about following a strict approach. This can cause problems with such things as encapsulation, which ensures that the implementation of the object is opaque. Look at it a bit like being very strict about following the highest possible normal form. My point is that you have to be careful not to offend people in their approach. Learn about their methodology before jumping into a passionate presentation of your own approach: take them from where they are to where you want them to be slowly, watching for resistance where communication could break down.
Looking back at the employee definition presented in my previous blog entry, note the following: a manager can have multiple employees working for her. This leads to a representation where a manager object includes a collection of employee objects. It also leads to a performance problem: because all the objects in the collection are instantiated (created) up front, it can take a long time to create the object that contains the collection. The concept of "lazy binding" was implemented to solve this. Basically, an object in a collection is not instantiated until it is accessed.
This is another area where database specialists can start a discussion to improve the overall performance. Now that I've set the premise, I'll cover it in more detail next time.
The other day I ran across an article on Infoworld.com: "Cloudera pitches Hadoop for everything. Really?"
Of course, the article starts by mentioning the expression about hammers and nails. This is an old story and it appears that it is getting ready to repeat itself. Like it’s been said: “those who forget the past are doomed to repeat it”.
Hadoop has been the biggest star of the big data story. I have to say that it is revolutionizing data processing and for good reasons. Many seem to point to the use of cheap clusters based on commodity hardware. I personally prefer to attribute it to the large amount of data that has different requirements from traditional data processing.
The traditional data processing needs are still there and still growing. Getting rid of “silos” of data has proven extremely difficult. It also relies on getting rid of years of investments and re-writing many proven applications.
Instead of trying to fit everything into Hadoop, it is much better to have an overall strategy that takes into account the different needs of different data sets and makes sure the overall architecture accommodates the exchange of information between all of them.
Cloudera wants to become the "enterprise data hub" powered by Hadoop. As the article mentions, "Hadoop is still seen on all sides as a bucket of parts...". Maybe it is a bit early to talk about an enterprise data hub based on Hadoop.
Of course, if all you have is a hammer, everything looks like a nail.
If you've been following my blog over the last few years, you may have noticed a few things lately:
I have not blogged in a few months
My blog's name has changed
The significant part is really the name change. It went from "Informix and Computing" to "Big data in motion".
Let me first address the Informix part. Yes, I am still involved with Informix activities. In fact, I am currently working on a proof-of-concept for Informix TimeSeries that involves technologies such as Java, Kafka, ZooKeeper, fastjson, MessagePack, and more. So, Informix continues to be involved in "Big Data" and its use with other current technologies.
Will I continue to talk about Informix? Probably. It all depends on whether I believe I have something interesting to say on the subject. As long as I have activities with Informix, I have opportunities to find interesting information.
Now. What about "Big data in motion"?
A while back I decided to go back to my old team: Worldwide Technical Sales and Enablement.
My main focus is now on InfoSphere Streams. This has already been an interesting ride. I've worked on multiple projects that include putting together an extensive training session, working on PoCs, writing DeveloperWorks articles, and more. I've even put together a DeveloperWorks wiki that centralizes all sorts of resources related to InfoSphere Streams. I called it the InfoSphere Streams Playbook.
InfoSphere Streams is part of an overall "Big Data" architecture. There are many ties between Streams and the BigInsights platform and any other technologies that help getting big data under control. Yes, that includes Informix. It also includes many other technologies.
My focus may be mainly on "in-motion" data but the entire "Big Data" solution stack eventually interacts with it. That explains the new blog title.
As usual, I want to continue "casting a large net" so I can be free to talk about anything I find interesting.
So, drop me a line, post comments. Let's continue a dialog that will help everyone (including me) learn new things and continue to have fun with our technological challenges.
Here's a quick entry to let you know that there is a new blog available at:
Virtual worlds are important in gaming but are also being seriously looked at in business. This could be the next step in the evolution of collaborative software, and Informix is in the middle of it!
Take a look
Carlton Doe has distilled his many years of Informix expertise into a new book. It is titled "Administering Informix Dynamic Server". It covers IDS 11.5 and even talks about IDS on Mac OS X.
Of course, you can get this book online or, hopefully, from your local book store. I got my copy a week or so ago and I have already used it to check on a few things about IDS 11.5. Great time savings! In addition to the great, easy-to-read content, I actually like the size. Despite being over 400 pages, it is less than one inch thick and feels quite light. I think it will establish permanent residence in my briefcase when I travel.
Here's one more for you: The book is available at the bookstore at the IBM Information on Demand conference (Oct 26-31). For any of you who are going to the conference and have not already ordered this book, you can buy it there.
But wait! If you buy or bring your book to the conference, you could get Carlton to autograph it. He will be available on Monday between 3:00pm and 4:00pm in the main event center hall and again on Wednesday from 1:30pm to 2:30pm.
I'm bringing my copy.
Carlton is not done yet. He is working on a sequel to this book: "Administering Informix Dynamic Server, Advanced Topics". It will be more focused on areas like performance tuning, ER, HDR/MACH-11, and other topics. Something to look forward to next year, hopefully.
It's hard to believe that we are already at the end of March. Seems like it should still be January. According to my blog, it still is January! I better get on with it!
Informix just came out with version 11.70.xC2. No big deal, you may think. Wrong. It is a big deal! With xC2 we are making available a new edition: IBM Informix Ultimate Warehouse Edition.
I'll be talking about this at the France Informix users group next Monday. With this, queries that used to take hours could take minutes. Some queries end up performing 100 times faster! For more information, look at:
And that's not all. We've been looking at ways to help our 4GL customers modernize their environment for years. We want customers to get more value out of their 4GL code and new application developments. The result: Informix Genero. Find out more at:
Stay tuned. The year is barely starting.
There is a new redbook now available for people who want to get started with the TimeSeries feature.
Other resources to help you include:
Let me know what you are doing with Informix TimeSeries!