At IBM Edge 2016 last week, I connected with IT leaders who are running into the limitations of their existing, aging data infrastructure. In some cases, they have outdated systems that haven’t kept pace with modern data center requirements and are becoming too expensive for them to maintain. In others, they run into intractable limitations in the performance and scalability of their data platforms as they try to add new capabilities that the business requires.
Among them is Florida Blue, a health insurer at the forefront of healthcare's transformation in the US, innovating to form an integrated ecosystem of products and services that ensures its clients get the best possible healthcare experience. For example, because individuals can now select their own insurance providers and plans, rather than leaving those choices to the employers and other groups that typically made them in the past, the company needed to develop new products designed for individuals as well as for groups. It is also expanding its business in new ways, processing Medicare claims for the government and opening a network of health clinics. Its IT practice needed to evolve, and its existing x86-based infrastructure simply was not up to the task. By moving core systems to a modern open data platform, in this case MongoDB on IBM Power Systems, Florida Blue improved application performance by more than three times while reducing database costs and server sprawl.
An additional benefit of the migration for Florida Blue was a dramatic improvement in the efficiency of its call centers. Under the old system, clients calling customer service spent about nine minutes on the phone on average, much of that time on hold while agents pulled up reports. Using MongoDB and predictive analytics to preselect the queries an agent is most likely to need, the call center cut hold times by more than three minutes. This has an obvious effect on customer satisfaction, and it also improves the productivity and efficiency of the entire call center. These are only initial results; with further development and tuning, the efficiencies will continue to grow.
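The "preselect" idea above can be sketched in general terms. The following is a minimal illustration with entirely hypothetical field and collection names (the article does not describe Florida Blue's actual schema or code): a member is modeled as a single MongoDB-style document that embeds plan details, claims, and open cases, so one lookup can assemble the snapshot an agent is most likely to need before the call even connects.

```python
# Sketch of preselecting call-center data from a document store.
# All names (build_agent_snapshot, "claims", "cases", etc.) are
# hypothetical. In MongoDB, `member` would be a single document
# retrieved with one find_one() call rather than several report queries.

from datetime import date

def build_agent_snapshot(member_doc, max_claims=3):
    """Gather the fields an agent typically needs, ready before the
    call is connected: plan name, open cases, most recent claims."""
    claims = sorted(member_doc.get("claims", []),
                    key=lambda c: c["filed"], reverse=True)
    return {
        "name": member_doc["name"],
        "plan": member_doc["plan"]["name"],
        "open_cases": [c for c in member_doc.get("cases", [])
                       if c["status"] == "open"],
        "recent_claims": claims[:max_claims],
    }

# Example member document (embedded claims and cases, document-store style).
member = {
    "name": "Jane Doe",
    "plan": {"name": "BlueOptions", "tier": "Silver"},
    "claims": [
        {"id": 1, "filed": date(2016, 5, 2), "amount": 120.0},
        {"id": 2, "filed": date(2016, 8, 14), "amount": 340.5},
    ],
    "cases": [{"id": 7, "status": "open", "topic": "billing"}],
}

snapshot = build_agent_snapshot(member)
print(snapshot["recent_claims"][0]["id"])  # most recent claim first -> 2
```

Because everything the agent needs lives in one document, the hold-time cost of joining several relational reports disappears; the predictive-analytics piece would decide *which* members to prefetch as calls enter the queue.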
IBM is working with a variety of key open source database providers to bring benefits like these to more clients, whatever their requirements: MongoDB (a NoSQL document store), EnterpriseDB (an enterprise Postgres database), Neo4j (a NoSQL graph database), and now Hortonworks (a Hadoop distribution). Bringing these databases to Power Systems enables clients to build the enterprise-grade, future-proof data platforms that tomorrow's business applications will require.
IBM Edge is a wrap for this year, but we're always on a mission to help organizations like Florida Blue transform. You may have caught up with us at MongoDB World in New York earlier this year, and our support for the open source database community continues with sponsorships of the upcoming Postgres Vision and Graph Connect conferences in San Francisco this October. IBM and EDB are also hosting a joint client event immediately following Postgres Vision, called Vision Jam, and right now you can receive 20 percent off Postgres Vision when you register for Vision Jam.
To find out more about how modern data platforms can transform your business capabilities, come engage with us at these events, or visit us here to see which database is right for you.