Big Data and Analytics is a rapidly emerging field that is engaging an increasing number of our business partners at the IBM Innovation Center in Silicon Valley.
This is hardly surprising, considering IBM's leading position in this space (http://ibm.co/17nXhrj) and the fact that Silicon Valley really is ground zero for the Big Data movement, in the US and around the world.
Over the next several weeks and months, I'll discuss IBM's presence in this space from several different perspectives: the IBM products and their functionality, IBM business partners and their solutions in the Big Data space, and finally IBM's view of the Big Data and Analytics ecosystem as a whole.
As you read these blog posts, if you’d like me to cover a specific topic that I haven’t covered, just post a comment. If the amount of interest warrants it, we’ll create a forum outside these blog posts to discuss further.
It may come as a surprise to some, but IBM has been in the analytics space for at least the 35 years that I have been in the business. Back when I was just getting started, IBM was creating the concept of the "Business Information Center" and "Decision Support Systems," using "4th generation languages" like Focus and Mantis to produce both ad hoc and canned reports for the business. All this on the mainframe, of course.
The next step was to extract data from the operational systems to data files where reporting could be done without interfering with business processing. In essence, these were primitive data warehouses.
How far IBM and the analytics industry have come since then!
From that base we now routinely build systems that apply statistical techniques to drive fact-based, actionable predictions that can save our customers many millions of dollars.
Case in point: I recently worked as part of a team at an electric power utility on a condition-based maintenance Proof of Concept (POC). The utility gathers performance data from field devices and combines it with preventive maintenance data from the Enterprise Asset Management system (IBM Tivoli Maximo) to predict a failure point using Cognos Enterprise and SPSS. If the predicted failure point falls before the next preventive maintenance cycle, the unit can be maintained or retired before a failure causes a large-scale outage.
The POC is complete, with a very satisfied customer. But since only a small fraction of the utility's field devices have been instrumented, the utility now faces the challenge of collecting performance data from a huge number of devices, a task that, when completed, will move it squarely into the Big Data space. We'll talk more about this in future posts.
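Stripped of the analytics machinery, the decision rule at the heart of this kind of condition-based maintenance is simple to state. Here is a minimal sketch in Python; the function name, dates, and labels are illustrative assumptions of mine, not taken from the actual SPSS/Cognos implementation:

```python
from datetime import date, timedelta

def maintenance_decision(predicted_failure: date, next_pm_cycle: date) -> str:
    """Recommend an action for a single field device.

    If the predicted failure point falls before the next preventive
    maintenance (PM) cycle, the unit should be maintained or retired
    early; otherwise the normal PM schedule suffices.
    """
    if predicted_failure < next_pm_cycle:
        return "maintain-or-retire-now"
    return "follow-pm-schedule"

# Hypothetical example: the model predicts failure in ~30 days,
# but the next scheduled PM visit is 90 days out.
today = date(2013, 10, 1)
prediction = today + timedelta(days=30)
next_pm = today + timedelta(days=90)
print(maintenance_decision(prediction, next_pm))  # → maintain-or-retire-now
```

The real value, of course, is in the predictive model that produces `predicted_failure` from the device telemetry; the comparison above is just the last step that turns that prediction into an actionable work order.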
What I really want to talk about now is the suite of capabilities focused on adding value to the business by finding and qualifying customers. Just as with the utility's field devices, it is only since the advent of mobile platforms with their social apps that businesses have had the ability to acquire data on how individual customers really regard their products. And it is only in the very recent past that businesses have acquired the ability to analyze that data in a meaningful way.
The acquisitions of Sterling Commerce (http://ibm.co/1aBqbmj), Vivisimo (http://ibm.co/1ipbJyA), Tealeaf (http://ibm.co/1hndP4h), and Algorithmics (http://ibm.co/1dij2aT) have provided IBM's Big Data Platform with an unparalleled analytic capability that can be applied to the unstructured Big Data stored by Apache Hadoop, which is also part of the IBM Big Data portfolio.
I am currently working with a local cloud-based business intelligence and analytics company on a POC that takes data from Salesforce and produces near-real-time analytics and trending for a large media company.
We're just getting started, but the future looks bright for this company and others.
Stay tuned. I’ll profile the company and our results in future posts.