How to put the IBM built-in data scientist to work for you


In modern IT operations teams, one of the biggest challenges is monitoring an increasingly complex environment—across many different tools—with fewer people. On top of that, teams face mounting pressure to avoid outages. And because of the immediacy of social media, outages can become very public, very quickly, damaging customer sentiment toward the company’s brand.

Some companies are choosing to employ data scientists to help them overcome challenges like these. The data scientist can use machine learning libraries to build a custom solution to help monitor their environment for potential problems.

There’s a better option if you don’t want to be in the business of building and maintaining custom tools: an automated data scientist. It’s a tool that learns the normal behavior of your time series data to help you avoid service-impacting outages. It can also unify your performance monitoring systems into a single pane of glass, discover mathematical relationships to support root cause analysis and consolidate multiple anomalies into a single problem.

With IBM Operations Analytics, a cognitive data scientist is essentially built into the product. The cognitive data scientist automatically creates and maintains the most suitable data models for your monitoring data. It intercepts analytic output, tests it and only notifies your team of high-confidence anomalies. To help operations teams take action, the technology delivers insights that include forecasts, discovered relationships, correlations and anomaly history.
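To make the idea concrete, here is a minimal sketch—not IBM’s actual implementation, which is far more sophisticated—of what “learning the normal” can look like: a baseline and tolerance band are recomputed from recent history for every point, so the definition of normal adapts as the series drifts, and only points well outside the band are flagged.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=30, sigmas=3.0):
    """Flag points that deviate strongly from a rolling baseline.

    The baseline (mean) and tolerance band (k standard deviations)
    are recomputed for every point from the preceding window, so
    "normal" adapts as the series drifts -- no static threshold to
    maintain by hand.
    """
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        baseline = mean(history)
        band = sigmas * stdev(history)
        if abs(series[i] - baseline) > band:
            anomalies.append((i, series[i]))
    return anomalies

# Hypothetical response times: steady values with one sudden spike
data = [100.0 + (i % 5) for i in range(60)] + [250.0]
print(detect_anomalies(data))  # flags only the spike at index 60
```

Because the band scales with the data’s own variability, a noisy-but-stable metric tolerates wider swings than a flat one, which is what keeps low-confidence blips from turning into alerts.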

How does the built-in data scientist help IT operations?

First, the team doesn’t need to focus on how the insights were achieved (no new hires, no new skill-sets, no statistical headaches). They can focus on what they do best: delivering great services, assisted by machine learning. Because the “data scientist” is in the code, actionable insights can be achieved in real-time and at scale. When IT environments change, the IBM technology will simply adapt and learn the “new normal,” avoiding the need to manually adapt data models and thresholds.

Perhaps the biggest bang for your buck is what IBM calls the “performance manager of managers.” Typically, centralized operations teams run between 20 and 40 performance managers, each requiring domain knowledge and configuration settings to create alerts. The IBM technology takes feeds from any performance manager and provides a single solution to dynamically set and maintain thresholds across your entire infrastructure and applications. And because the baselines capture the seasonality in the data, they are consistently more effective than traditional manual methods. The IBM technology can actually reduce noise while delivering increased efficiency.
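As an illustration of why seasonal baselines beat flat thresholds, here is a hypothetical sketch (all data and names invented for the example) that builds a separate tolerance band per hour of the day, so overnight lulls and morning peaks each get their own definition of normal:

```python
from collections import defaultdict
from statistics import mean, stdev

def seasonal_thresholds(samples, sigmas=3.0):
    """Build a per-hour tolerance band from (hour, value) samples.

    Grouping by hour of day captures daily seasonality: the normal
    load at 3 a.m. differs from the 9 a.m. login rush, so a single
    flat threshold would either miss anomalies or generate noise.
    """
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    bands = {}
    for hour, values in by_hour.items():
        m = mean(values)
        s = stdev(values) if len(values) > 1 else 0.0
        bands[hour] = (m - sigmas * s, m + sigmas * s)
    return bands

# Hypothetical request rates: quiet overnight, busy at 9 a.m.
samples = [(3, v) for v in (10, 12, 11, 9, 13)]
samples += [(9, v) for v in (200, 210, 195, 205, 190)]
bands = seasonal_thresholds(samples)
# A reading of 190 sits inside the 9 a.m. band but far outside the 3 a.m. band.
```

A single flat threshold over the same data would have to sit above the morning peak, silently swallowing any overnight anomaly—exactly the noise-versus-coverage trade-off that per-season baselines avoid.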

The data scientist in practice: Banking

One real-world example comes from the banking industry. An IBM banking client is using IBM Operations Analytics technology to manage its online banking application. The solution helps the bank identify performance anomalies, which the operations team uses to take action.

Over a three-month period, the team reduced major incidents on the banking application by 85 percent, from 20 to three. Think about the value this team achieved through machine-assisted proactive operations:

  • 85 percent fewer interruptions to the online banking service
  • 85 percent fewer chances of revenue loss
  • 85 percent less chance of brand-damaging feedback circulating on social media

Stay tuned for more IBM Operations Analytics insights

In this post I highlighted one of my favorite client value stories and explained how the unique IBM approach can help you achieve similar results without specialized skill sets.

In the next post, Ian Manning, lead developer for IBM Operations Analytics, takes us under the hood. He explains how IBM differs from competitors, and most importantly how scalable proactive operations is enabled through actionable insights on performance data.

In the third post, Kristian Stewart, senior technical staff member for IBM Analytics and Event Management, will explain how our approach delivers effectiveness and efficiency gains, at massive scale, through actionable insights from event data.

Finally, to complete the series, Jim Carey, offering manager for Netcool and BSM products, will discuss how IBM is meeting the need to shift to DevOps. He’ll demonstrate strong new value for cognitive and agile operations.

Interested in learning more? Check out what’s possible for your business with IBM Operations Analytics.

Offering Manager ITOA - IBM Operations Analytics

