When I think of artificial intelligence (AI), I cannot help but think of the science fiction films that I grew up watching and the fictional AI computers they featured: HAL in “2001: A Space Odyssey” or Skynet in “The Terminator.” In those films people interacted with computers that could understand natural language and make decisions.
Today, in the real world, these fictional AIs have been surpassed by Watson and others – and thankfully are a lot less menacing. The art of the possible has progressed faster than my childhood self could have dreamed.
And yet, in IT operations, many companies still monitor their environments in a traditional way. They rely on static thresholds to flag anomalies, and some monitoring teams still depend on customer complaints to make them aware of problems.
To help companies monitor their environments more proactively and efficiently, we created Predictive Insights. It reduces the need for manual, time-consuming effort so teams are alerted to the most significant problems first.
You cannot gain efficiency by simply replacing one manual effort with another. That's why we made Predictive Insights both configurationless and time series data-agnostic, so teams do not have to spend additional effort tuning or configuring the system. Configurationless means the software learns automatically, without human intervention. Time series data-agnostic means it can derive value from any time series data (key performance indicators, metrics, or any measurement taken over time), regardless of whether the data came from IBM or a third-party source.
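To make the "configurationless" idea concrete, here is a minimal, hypothetical sketch of learning a baseline directly from the data instead of asking an operator for a static threshold. The rolling-window z-score approach shown is an illustrative stand-in, not the algorithm Predictive Insights actually uses.

```python
import statistics

def detect_anomalies(series, window=24, z_threshold=3.0):
    """Flag points that deviate sharply from a rolling baseline.

    The baseline (mean and spread) is learned from the data itself,
    so no one has to hand-pick a static threshold per metric.
    """
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        z = (series[i] - mean) / stdev
        if abs(z) > z_threshold:
            anomalies.append((i, series[i]))
    return anomalies

# A steady metric with one spike: only the spike should be flagged.
cpu = [50.0] * 30
cpu[25] = 95.0
print(detect_anomalies(cpu))  # → [(25, 95.0)]
```

The same function works unchanged on CPU, latency, or any other time series, which is the essence of being data-agnostic.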
The second reason we do not require configuration is that it does not make sense for a machine learning-driven product to ask questions it could better answer itself. For example, I have seen competitor products that require someone to select and configure the algorithm used to evaluate a metric. At best, a data scientist can make an educated guess; at worst, the result is a random alarm generator. The right answer depends on the data itself.
Predictive Insights is different. It has multiple algorithms assess the data, determines which algorithm is best suited to each metric, and builds mathematical models that describe each metric's normal behavior. Each model must then pass a validation phase to ensure it is accurate and neither overfits nor underfits the data. A model that fails validation is sent back to the data for relearning; this cycle can repeat many times, and only models that pass are used for anomaly detection. The best part is that all of this happens automatically, without disrupting the environment.
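The learn-then-validate loop described above can be sketched in a few lines. Everything here is an illustrative assumption: the two candidate "models," the mean-squared-error scoring, and the `max_error` validation gate are toy choices, not Predictive Insights internals.

```python
import statistics

def fit_constant(train):
    """Candidate 1: predict the training mean everywhere."""
    m = statistics.mean(train)
    return lambda t: m

def fit_linear(train):
    """Candidate 2: least-squares trend line over the training window."""
    n = len(train)
    mx, my = (n - 1) / 2, statistics.mean(train)
    cov = sum((x - mx) * (y - my) for x, y in enumerate(train))
    var = sum((x - mx) ** 2 for x in range(n))
    slope = cov / var
    return lambda t: my + slope * (t - mx)

def validation_error(model, holdout, offset):
    """Mean squared error of the model on unseen points."""
    return statistics.mean((model(offset + i) - y) ** 2
                           for i, y in enumerate(holdout))

def select_model(series, split=0.8, max_error=1.0):
    """Fit each candidate on the head of the series, score it on the
    tail, and accept the winner only if it passes the validation gate."""
    cut = int(len(series) * split)
    train, holdout = series[:cut], series[cut:]
    candidates = {"constant": fit_constant, "linear": fit_linear}
    scored = {name: validation_error(fit(train), holdout, cut)
              for name, fit in candidates.items()}
    best = min(scored, key=scored.get)
    if scored[best] > max_error:
        return None, scored  # failed validation: send back for relearning
    return best, scored

trend = [0.5 * t for t in range(50)]  # steadily growing metric
best, scores = select_model(trend)
print(best)  # the linear candidate should win on trending data
```

Holding out the tail of the series is what catches overfitting: a model that merely memorizes the training window scores poorly on data it has not seen and gets sent back for relearning.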
Predictive Insights can typically evaluate millions of models in less than one minute. It performs three types of relationship discovery: correlations, Granger causalities, and metrics that are frequently anomalous at the same time. Any one of these algorithms alone requires trillions of calculations, and yet Predictive Insights runs them automatically, every day, on commodity hardware.
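To illustrate the simplest of the three discovery passes, here is a toy correlation-based sketch: it scans every pair of metrics and flags those that move together. This covers only the correlation pass (Granger causality and co-anomaly detection are beyond a few lines), and the metric names and threshold are made up for the example.

```python
from itertools import combinations
import statistics

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    denom = (sum((x - ma) ** 2 for x in a) *
             sum((y - mb) ** 2 for y in b)) ** 0.5
    return cov / denom if denom else 0.0

def discover_relationships(metrics, threshold=0.9):
    """Return metric-name pairs whose |correlation| clears the threshold."""
    related = []
    for (na, a), (nb, b) in combinations(metrics.items(), 2):
        r = pearson(a, b)
        if abs(r) >= threshold:
            related.append((na, nb, round(r, 3)))
    return related

metrics = {
    "cpu":     [10, 20, 30, 40, 50],
    "latency": [11, 19, 31, 42, 49],   # tracks cpu closely
    "disk":    [70, 70, 71, 70, 70],   # unrelated
}
print(discover_relationships(metrics))  # only (cpu, latency) is flagged
```

The pairwise scan is why the calculation counts explode: with millions of metrics, the number of candidate pairs, and therefore correlations to compute, grows quadratically.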
What once existed only in Hollywood's imagination is now a reality with Predictive Insights. To learn more, register for our webinar on the value Predictive Insights brings to IT operations.
Interested in how APIs can drive business insights for IT operations teams? Check out the first post in our IBM Operations Analytics series. And stay tuned for additional key learnings from our colleagues in coming weeks.