Picture an IT manager proudly presenting a set of charts to his customers indicating that all is well with the world, only to be blasted by an irate business line manager armed with a completely different, user-based, and wholly contradictory view of system health and stability. Why does this happen? And why does it keep happening?
Old Man Voice: Back in my day, when people still bought CDs, and streams meant running water, an IT system had a purpose in life. It did one or two things right, and you liked it.
In those days, systems, by and large, supported core business processes. The Purchasing System for a global company's Asia-Pacific geography did exactly what its name implied. If that system was down, it could rightly be said that purchasing for all of AP was down. Everyone knew what was affected, and the impact on the business. As the nature of enterprise computing has evolved over the years, several factors have disrupted this straightforward view of the technical universe:
- ERP systems have proliferated, combining multiple back-office business processes
- Increases in processing capability and a relentless focus on minimizing TCO have engendered massive technical consolidation
- The explosion of the internet gave rise to Business-to-Business (B2B), and then Business-to-Consumer (B2C), models of commerce
- Hardware and software virtualization further blurred the boundaries between applications, systems and the business processes dependent upon them
- And now the ubiquitous Cloud obscures the faint lines of demarcation that remained
If you are still managing IT with tools and/or processes hearkening back to the monolithic (and Paleolithic) business environment of yore, of course you are going to have problems. Measuring server-level availability and performance is useful to the teams responsible for directly maintaining those servers, but it is almost irrelevant to a business line manager or the end users whose interests they represent.
Business managers and end users only care about machine-level and application-level operational metrics to the extent that they directly correlate with end-user experience and productivity. Measuring and managing what matters to the customers of IT service providers requires a business services management (BSM) model that integrates a multitude of systems management tools with the support processes and teams needed to operate them. This represents not only a philosophical challenge but a technical one as well, since such integration crosses the boundaries of technical support teams and systems management tool capabilities.
If your process availability management strategy requires you to take each business process and map it out to all of the applications and system infrastructure it traverses, in order to understand its performance, health, capacity, and availability, you are fighting an uphill battle.
Many organizations recognize the limitations of teams and tools in creating a sufficiently detailed, real time mapping of their process and technical interdependencies. For this reason, they attempt to approximate a business services management model through direct instrumentation of a handful of very critical business processes.
Such a case-by-case approach is inadequate, however, when there are a large number of critical processes and the supported environments are highly heterogeneous. Scaling such a solution becomes impractical due to a combination of expense and implementation time. Even at its best, an exception-based, limited deployment of critical process availability management represents a missed opportunity to incorporate the results of the tooling that already exists at the systems and applications levels of the solution.
On engagements I've worked in the past, I have used IBM Tivoli Application Dependency Discovery Manager (TADDM), in conjunction with IBM Tivoli Monitoring (ITM) and IBM Tivoli Business Services Manager (TBSM), to create a unified view of the process <> application <> infrastructure relationships for business ecosystems with extremely complex logical and physical architectures. This solution, which we've dubbed Business Process Availability Management (BPAM), enables us to take advantage of existing monitoring capabilities within an enterprise, augment them with application-specific and business-process-specific monitoring capabilities, and create a scalable, integrated, real-time view of availability and performance that is detailed enough for an IT support organization, yet relevant enough for a business line manager.
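To make the idea concrete, the process <> application <> infrastructure rollup can be sketched in a few lines of code. The following is purely a hypothetical illustration: the process names, host names, and worst-case rollup rule are assumptions for the sake of the example, not the actual BPAM, TADDM, or TBSM implementation.

```python
# Illustrative sketch of a business-process availability rollup.
# All names and the rollup rule are hypothetical.

# Map each business process to the applications it traverses,
# and each application to its underlying infrastructure.
PROCESS_TO_APPS = {
    "AP Purchasing": ["SAP ERP", "Vendor Portal"],
    "Order Fulfillment": ["SAP ERP", "Warehouse Mgmt"],
}
APP_TO_INFRA = {
    "SAP ERP": ["db01", "app01", "app02"],
    "Vendor Portal": ["web01"],
    "Warehouse Mgmt": ["app03"],
}

SEVERITY = {"up": 0, "degraded": 1, "down": 2}

def process_status(process, infra_status):
    """Worst-case rollup: a process is only as healthy as its
    least healthy dependency."""
    worst = "up"
    for app in PROCESS_TO_APPS[process]:
        for host in APP_TO_INFRA[app]:
            status = infra_status.get(host, "down")  # unknown hosts count as down
            if SEVERITY[status] > SEVERITY[worst]:
                worst = status
    return worst

# Component-level status as a monitoring layer might report it.
infra = {"db01": "up", "app01": "degraded", "app02": "up",
         "web01": "up", "app03": "up"}

print(process_status("AP Purchasing", infra))      # -> degraded
print(process_status("Order Fulfillment", infra))  # -> degraded
```

The point of the sketch is the translation step: a single degraded server (`app01`) surfaces as a degraded business process for every process whose dependency chain crosses it, which is the view a business line manager actually cares about. In practice the mapping itself is the hard part, which is why discovery tooling such as TADDM matters.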
The following figure provides an overview of the Business Process Availability Management architecture.
The IBM Redbooks publication IBM Software for SAP Solutions includes information on Systems Management for large SAP implementations, and it provides a detailed description of the IBM Business Process Availability Management approach.
Derek Jennings is a Senior Certified Consulting IT Specialist with the IBM Global Business Services® division in the USA. Derek is currently an offerings and solutions architect for IBM Dev/Test Cloud Services. Derek has over twenty years of experience in full life-cycle performance engineering for SAP and large, complex enterprise systems. He has also designed the monitoring and management strategy for many of the largest and most mission critical business systems in IBM. Derek is a co-author of the IBM Redbooks publication IBM Software for SAP Solutions.