Announced last fall at Information on Demand, IBM's IT Operations Analytics solutions
have their own track here at Pulse 2014. And, in addition to my duties capturing some of the highlights of the opening general sessions (here), I've been tapped to cover this beat as well.
In his pre-event blog post, IBM marketing lead for IT Operations Analytics (#ITOA for those of you following along on Twitter) Paul Kraeger sums up the situation like this:
"IT ops teams have a crazy tough job. A typical IT environment generates MILLIONS of ALARMS A DAY, TERABYTES OF DATA, and ops teams often have SECONDS to respond before service is impacted. [...] At the same time, your users are demanding more from your systems in the form of applications that work faster and are always available, from any device. Every second counts."
I sat in on two sessions today. I can't say I completely understood everything I saw, but my background in business analytics provided enough of a footing to understand why this new solution category is increasingly important to IT. I've summarized the highlights below.
A Bold New Business Technology World Demands IT Analytics
First up was a session with Glenn O'Donnell, Principal Analyst with Forrester Research.
"Analytics help us understand the mess we've created. And boy, have we ever created a mess," he said.
O'Donnell pointed to the recent outage at Gmail, data breaches at Target and the rocky rollout of healthcare.gov as three object lessons in the value of IT Operations Analytics. "Software has evolved to the point where it's superior to our own abilities. We need to make changes quickly, but we need to know they're the right changes so we don't have to go back and re-do our work."
O'Donnell pointed to two main areas where greater insight into network operations can help IT improve performance: speed and quality. In the former, analytics help IT make faster decisions about resource allocations and system changes, and provide better feedback to developers. In the latter, they help IT make the right decisions.
"If you're the typical IT organization, you're not that fast and you don't have high quality. I'm sorry, but it's true."
O'Donnell also highlighted the high costs and other negative outcomes of increasing network complexity:
- Shadow IT systems: Developers are pushing for new services inside the firewall and will look elsewhere if IT can't deliver quickly enough.
- A lack of feedback on application performance puts the "Dev" and "Ops" camps at odds.
"Complexity obscures your ability and grows at an exponential rate. Our ability to understand it does not."
O'Donnell then pointed to some of the ways IT Operations Analytics can help IT understand and better manage their complex systems:
- Incident Management: Analytics can enable faster resolution and more confident triage.
- Change Management: Analytics can help IT make more trustworthy decisions about what to change, when and why.
- Automation: Necessary for Industrialized IT. "You need it or things will break."
- Security: Not only to detect threats from outside, but from inside the company as well. "Analytics can protect us from ourselves. We can do things better and stop doing dumb things."
- DevOps: Information is the fuel of continuous service delivery; it needs to flow throughout the entire development cycle.
Analytics can also help IT reduce its operational costs, said O'Donnell. Most IT organizations measure their performance on Mean Time To Resolution (MTTR), but O'Donnell deconstructed that metric into its component parts:
- Mean time to identify
- Mean time to know
- Mean time to fix
- Mean time to verify
O'Donnell said the longest, and therefore most expensive, stage is the mean time to know. Most organizations throw "all the king's horses" at a problem because the root cause isn't readily apparent: "People don't know what they're chasing down." But with analytics, organizations would know instantly who or which function needs to intervene.
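For readers who, like me, think better in examples: here's a quick back-of-the-envelope sketch (my own, not something shown in the session) of how those four stages add up for a single incident. The timestamps, field names and shares are invented purely to illustrate where the "time to know" tends to dominate.

```python
from datetime import datetime

# Hypothetical timestamps for one incident, marking the end of each stage
# O'Donnell described: identify, know (root cause), fix, verify.
incident = {
    "occurred":   datetime(2014, 2, 24, 9, 0),    # fault actually begins
    "identified": datetime(2014, 2, 24, 9, 5),    # alarm raised / noticed
    "known":      datetime(2014, 2, 24, 10, 45),  # root cause understood
    "fixed":      datetime(2014, 2, 24, 11, 10),  # change applied
    "verified":   datetime(2014, 2, 24, 11, 25),  # service confirmed healthy
}

stages = [
    ("mean time to identify", "occurred",   "identified"),
    ("mean time to know",     "identified", "known"),
    ("mean time to fix",      "known",      "fixed"),
    ("mean time to verify",   "fixed",      "verified"),
]

total = incident["verified"] - incident["occurred"]
for name, start, end in stages:
    delta = incident[end] - incident[start]
    print(f"{name:<22} {delta}  ({delta / total:.0%} of time to resolution)")

# Averaged across many incidents, the "time to know" slice is typically
# the largest -- the gap analytics is meant to shrink.
```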
"The key is to tackle complexity before it tackles you," said O'Donnell. IT Operations Analytics gives IT that ability.
Predict. Search. Optimize: How IBM is Optimizing Operations
Later in the day, I sat in on a session entitled "Improve Your Cloud/Datacenter Operations with IT Operations Analytics."
Here, IBM's Pratik Gupta explained how IBM's IT Operations Analytics tools can mine that flood of operational data to predict and avoid problems, quickly isolate root causes and optimize workloads across virtualized systems, resulting in improved business operations.
Gupta grouped the benefits into three categories: Predict, Search and Optimize.
Predict: Reacting to performance thresholds isn't enough, said Gupta. IT needs to prevent outages before they happen and impact service. Gupta shared an example of a major retail bank.
Using IBM SmartCloud Predictive Insights, the bank was able to identify not only threshold problems across its 80 servers and 40,000-plus metrics, but also issues it wasn't even aware of and that didn't lend themselves to threshold monitoring, including 10 major incidents in a 10-week period caught in advance of customer detection. The estimated savings over four weeks totaled $600,000.
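Gupta didn't walk through the product's actual models, but the idea of catching problems that static thresholds miss can be illustrated with a simple baseline-deviation check. The sketch below is my own illustration with made-up metric values: it flags points that stray several standard deviations from a rolling baseline rather than crossing a fixed limit.

```python
from statistics import mean, stdev

def baseline_anomalies(samples, window=20, sigmas=3.0):
    """Flag samples that deviate sharply from the trailing baseline.

    Unlike a fixed threshold (e.g. "alert above 90% CPU"), this compares
    each value to what the metric normally does, so a metric that quietly
    drifts away from its usual behaviour still gets flagged.
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(samples[i] - mu) > sigmas * sd:
            anomalies.append((i, samples[i]))
    return anomalies

# Made-up response-time metric: steady around 120 ms, then a subtle shift
# that would never trip a coarse "alert above 500 ms" threshold.
series = [120 + (i % 5) for i in range(40)] + [180, 195, 210, 230, 240]
print(baseline_anomalies(series))
```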
Search: Diagnosing service problems in applications and infrastructure involves too much data to analyze manually, said Gupta. Instead, Barclays bank turned to IBM IT Operations Analytics to perform an analysis of its log files of customer transactions.
The result? The system revealed the respective profitability of each channel, leading to better IT decisions about which supporting services and processes merited increased investment. The ability to perform searches in a business context let Barclays connect simple monitoring through to analytics through to workflow, said Gupta.
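Gupta didn't show the underlying queries, but the "search in a business context" idea, rolling raw transaction log lines up into per-channel figures, can be sketched roughly as follows. The log format, field names and values here are hypothetical, not Barclays' data.

```python
import re
from collections import defaultdict

# Hypothetical log format: "2014-02-24T10:01:02 channel=mobile amount=42.50 fee=0.35"
LINE = re.compile(r"channel=(?P<channel>\w+) amount=(?P<amount>[\d.]+) fee=(?P<fee>[\d.]+)")

def activity_by_channel(lines):
    """Aggregate transaction log lines into per-channel volume and fee revenue."""
    totals = defaultdict(lambda: {"transactions": 0, "fees": 0.0})
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue  # skip lines that aren't transactions
        ch = m.group("channel")
        totals[ch]["transactions"] += 1
        totals[ch]["fees"] += float(m.group("fee"))
    return dict(totals)

sample = [
    "2014-02-24T10:01:02 channel=mobile amount=42.50 fee=0.35",
    "2014-02-24T10:01:05 channel=branch amount=300.00 fee=2.10",
    "2014-02-24T10:01:09 channel=mobile amount=15.00 fee=0.35",
]
print(activity_by_channel(sample))
```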
Optimize: A lack of clarity on resource allocation can lead IT to either drive up costs through over-allocation or risk breaching SLA commitments if systems aren't adequately funded, said Gupta. Organizations continually face the need to optimize server capacity to accommodate growth while also cutting costs. They must balance storage performance across data stores in a "business policy-driven optimization."
As a client example, Gupta pointed to IBM itself, which has seen its storage demands grow at more than 25 per cent per year. Again, through a combination of IT Operations Analytics tools, the company was able to reduce the labor for its storage tier optimization from 235 hours to six, saving $13 million in infrastructure costs over three years.
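The tiering work itself was only described at a high level in the session. As a rough illustration of what "business policy-driven optimization" can look like in miniature, the sketch below assigns workloads to storage tiers by access intensity under per-tier capacity limits; the tier names, capacities and workloads are invented for the example and aren't IBM's actual policy.

```python
# Hypothetical tiers, fastest (and most expensive) first, with capacity in TB.
TIERS = [("ssd", 50), ("fast_disk", 200), ("archive", 1000)]

# Hypothetical workloads: (name, size in TB, I/O operations per second).
WORKLOADS = [
    ("trading_db", 20, 9000),
    ("crm", 60, 1200),
    ("email_archive", 300, 40),
    ("analytics_scratch", 80, 700),
]

def place_by_policy(workloads, tiers):
    """Greedy policy: the hottest workloads (highest IOPS) get the fastest
    tier that still has room; everything else cascades down to cheaper tiers."""
    remaining = {name: cap for name, cap in tiers}
    placement = {}
    for name, size, _iops in sorted(workloads, key=lambda w: w[2], reverse=True):
        for tier, _cap in tiers:               # try fastest tier first
            if remaining[tier] >= size:
                placement[name] = tier
                remaining[tier] -= size
                break
        else:
            placement[name] = "unplaced"       # flag for capacity planning
    return placement

print(place_by_policy(WORKLOADS, TIERS))
```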