Guest post by Claus T. Jensen, Senior Technical Staff Member and Chief Architect for IBM SOA-BPM-EA Technical Strategy
A Service Oriented Architecture (SOA) is characterized by being based on a set of interacting (business) services and processes. Obviously those services and processes need to be managed, but is that all there is to managing a Service Oriented Architecture?
Many would routinely say "Yes" and leave it at that, but three other important aspects need to be taken into account to enable improved business outcomes:
- What is the set of business capabilities (or products) that need to be enabled?
- What are the changes required to support those business capabilities?
- What are the resources available to implement the changes?
None of these are SOA-specific per se, yet all of them are required to maximize the value derived from your SOA investment. There is plenty of literature on service governance, but precious little on the higher level portfolio management capabilities required to mature a SOA initiative long term.
Furthermore, we often see an initially successful SOA initiative ultimately fail due to not transitioning properly from project mode to portfolio mode; an issue that is exacerbated by the proliferation of devices and channels, and the need to extend beyond the boundaries of the enterprise. This article deals with the portfolio aspects of managing a Service Oriented Architecture.
The purpose of SOA portfolio management and four key types
Generically, a portfolio is simply a collection of "stuff" with the following characteristics:
- Somebody owns it
- It represents a consistent subset of the system under consideration
- It has associated with it defined value criteria
The purpose of portfolio management is to optimize the collection of "stuff" according to the defined criteria. The type of stuff being considered and the value criteria being applied will be different depending on a particular stakeholder viewpoint. For example, a CEO might ask if we have the right products, considering the portfolio of products offered by the company. A CFO might ask if we are making the right investments, considering the portfolio of investment opportunities. And an architect might ask if we have the correct architectural components, considering the portfolio of enterprise assets.
Most enterprises need the following four key types of portfolio management:
- Product portfolio management: Managing the set of products provided by the enterprise, typically using economically based key performance indicators (KPIs)
- Change portfolio management: Managing the set of potential and ongoing changes of the enterprise, typically using criteria for compliance and net impact or value of change
- Change resource portfolio management: Managing the set of resources that is available for changes, typically using criteria for resource allocation and metering
- Asset portfolio management: Managing the set of enterprise assets, typically using criteria for consistency, configuration management and reuse
All four types need to integrate in a synergistic fashion.
For example, change portfolio management and resource portfolio management need to come together for effective definition and execution of change projects. Similarly, product portfolio management and asset portfolio management need to come together for effective configuration management, managing the dependencies and relationships between business products and the assets from which they are built.
It is quite natural for an architect or engineer to default to an asset portfolio view. Yet a good SOA architect will understand the need for taking the other three types of portfolio management into account, or even driving them in the case where no other stakeholder seems to take appropriate responsibility.
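For the architecturally minded reader who thinks best in code, here is a minimal sketch (in Python; every name and criterion is invented for illustration and reflects no actual IBM tool) of the generic portfolio idea above: an owned collection of items, optimized against defined value criteria:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PortfolioItem:
    name: str
    attributes: dict

@dataclass
class Portfolio:
    owner: str                          # somebody owns it
    items: List[PortfolioItem] = field(default_factory=list)
    # Defined value criteria: each maps an item to a score.
    criteria: List[Callable[[PortfolioItem], float]] = field(default_factory=list)

    def score(self, item: PortfolioItem) -> float:
        return sum(c(item) for c in self.criteria)

    def optimize(self, keep: int) -> List[PortfolioItem]:
        """Rank items against the value criteria and keep the top ones."""
        return sorted(self.items, key=self.score, reverse=True)[:keep]

# A change portfolio might score candidate changes on compliance and net value:
changes = Portfolio(
    owner="Change board",
    items=[PortfolioItem("Add mobile channel", {"value": 8, "compliant": 1}),
           PortfolioItem("Retire legacy batch", {"value": 5, "compliant": 1})],
    criteria=[lambda i: i.attributes["value"],
              lambda i: 100 * i.attributes["compliant"]],  # compliance dominates
)
top_changes = changes.optimize(keep=1)  # -> [Add mobile channel]
```

The same skeleton fits all four portfolio types; only the items and the value criteria change per stakeholder viewpoint.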
Change portfolio management
Many different stakeholders in and around the enterprise have both the right and the obligation to propose desirable changes as seen from their viewpoint. This statement holds true for the stakeholders that suggest changes based on operational improvement, for the stakeholders that suggest changes based on the long-term architectural direction of the enterprise, but also for the "management stakeholders," the "auditor stakeholders," the "public legislation stakeholders," and so on. The key to integrated change portfolio management is to properly register, assess, and prioritize all of these potential changes. Optimizing the process of change begins with optimizing the selection of changes to execute.
Resource portfolio management
The resources that are available for change are always finite and never uniform. Assigning proper resources and skills to the desired change initiatives is the next key step to optimize the process of change. Some local control needs to be retained, or the organization will stall in extensive bureaucracy. Having said that, resources must also be available to prioritize for and assign to long-term enterprise-wide initiatives. Finding the proper balance between optimizing the current state (efficiency) and re-engineering the future state (effectiveness) is never easy, yet remains an important part of integrated portfolio management.
Product portfolio management
Finally, when changes are governed and resources properly controlled, an optimized change process can actively manage the portfolio of products that are offered by the enterprise. Organizations can adjust these products based on market trends and projections as well as current internal and external product performance.
Find related information here:
Read past issues of the IBM Smart SOA & BPM Newsletter and subscribe to receive future issues.
Guest post from Dr. Michael Zerbs, Vice President, IBM Risk Analytics
This month we at Algorithmics, an IBM Company, released the June issue of TH!NK, our semi-annual magazine exploring the world of financial risk management with an editorial focus on the space between inspiration and implementation.
TH!NK is an award-winning publication written by and for risk professionals, featuring engaging, original content on thought leadership designed to inspire conversations about the challenges of today and the possibilities of tomorrow.
Like most editorial writing on financial services in recent years, the issue is markedly inspired by the ramifications of turbulence and change, and the possibilities that arise in their wake.
Recent elections in France and Greece have added a new chapter to the ongoing sovereign debt crisis in Europe. Yet following both elections, Chancellor Angela Merkel of Germany clearly stated that neither she nor her government were interested in reopening the eurozone fiscal pact, or the strategy of deficit-cutting austerity measures.
Determining the best response in times of uncertainty has been an issue for financial service firms since the outset of the financial crisis. Regulators, governments and analysts have called for financial firms to change the way they do business.
One way that firms may be able to respond is by looking to how they have managed uncertainty in the past. The June issue's cover story, "Back to the Future," revisits capital and its role in the bank of tomorrow. When early banks operated as partnerships with personal liability attached, every decision regarding capitalization and risk profiles was owned by the decision makers. The impact of this framework on their business holds interesting implications.
Elsewhere in the issue are other features that explore new approaches to existing challenges. These include a look at interconnectivity and stochastic modeling, risk and social media, and the CVA desk's function of pricing the true cost of risk. In "Through the Looking Glass" we return to the topic of curve fitting, with an empirical look at how chief risk officers and supervisors can gain critical insights into major exposures they would otherwise be unable to obtain.
"Back to the Future" is not only the insightful cover theme of this month's TH!NK magazine, inspired by turbulence in financial services. It is also emblematic of the possibilities beginning to come to fruition in the wake of change for Algorithmics, as we continue our transformation as an IBM Company.
Together with IBM OpenPages, Algorithmics has formed IBM Risk Analytics, a segment of IBM Business Analytics dedicated to helping firms transform their business models and optimize outcomes through risk-aware decision making. I could not be more excited about this new chapter, in which the best teams in operational and financial risk management have come together with IBM to build on the successes of their respective pasts and create an even stronger integrated offering for our clients -- not only in financial services, but across industry sectors shaped by an ever-shifting ecosystem.
IBM Risk Analytics is committed to thought leadership -- but, more importantly, thought leadership that matters to the industries in which we operate. The June issue of TH!NK incorporates our past, in longstanding expertise areas such as CVA, counterparty credit risk, optimization and curve fitting, and our future, with content inspired by IBM Smarter Analytics, IBM Banking Industry Solutions and Social Business.
This issue is representative of the transformative changes through which, over the coming months, you will see IBM Risk Analytics blossom into a true force in the risk community. In our early days we have already seen outstanding feedback on the power of our integration -- especially our close and growing relationship with IBM's Global Business Services organization -- from our clients in forums such as ARC 2012 and Vision 2012.
What is the appropriate response in times of uncertainty and conflicting views on future direction? TH!NK posits that it is evolution -- but a particular kind of evolution that does not overlook historical wisdom. I expect that by our next issue in November 2012, TH!NK will demonstrate even further the convergence of the best of Algorithmics, OpenPages and IBM.
To read the issue in full, click here.
Good strategies require good information; I think we can all agree on that. If you want to fly to Austin, for instance, it's important that you establish whether you mean the Austin in Texas or the Austin in Minnesota before you buy your plane ticket. Failure to do so will threaten the success of your Austin-visiting strategy at a deep level.
You might also think of this in terms of the military phrase "actionable intelligence." If the intelligence isn't very good, the action you're contemplating probably isn't very well advised. (You could call that kind of information "actionable stupidity.")
For many organizations today, however -- especially the larger ones that have been around a while -- ensuring that information is good is far from easy. This comes as a consequence of many factors, including:
- The total volume of information, which is vastly higher today than it's ever been before. Is big data always a resource to be tapped? Or is it sometimes a challenge to be overcome?
- The age of information -- in too many cases, it has long outlived its usefulness, may indeed be flat-out wrong and if it plays a part in strategies, those strategies will likely be compromised.
- The way information can change as it's used in many ways, by many people, to achieve many goals. The game of Chinese Whispers (also called Telephone) illustrates pretty well how easily and thoroughly that can happen.
- The fact that information can occur in multiple versions which differ from each other in subtle or blatant ways. Reconciling these different versions to arrive at a single accurate truth, and eliminating the versions which aren't true, is no simple matter.
Information governance can help you fulfill the promise of big data
Recently I discovered a blog on these subjects by an IBM expert, Dave Corrigan, IBM's Director of Product Marketing for InfoSphere, and was intrigued to find him discussing these various ideas in terms of trust.
It makes perfect sense, of course. If you're building an omelette out of eggs, or a house out of wooden beams, you need to be able to trust that they aren't rotten. And if you're building business strategies, processes and decisions out of basic information, the same logic applies.
This, in short, is the heart of information governance: maximizing the business value of information by maximizing its quality and trustworthiness in a variety of related and interconnected ways. A quick phone call with Corrigan confirmed this interpretation.
"Information Governance establishes trust in information," he said. "Without trust, organizations fail to capitalize on new insights. But when business users can trust information, they act upon insights from analytics and reports, and operate more efficiently when using enterprise applications."
This struck me as particularly interesting because of the implications. Picture a CIO who, having invested heavily in big data solutions, proceeds to collect piles and piles of data, runs his shiny new analytics tools on the piles and generates lots of impressive-looking reports, only to round-file the reports because, at some basic level, they just don't seem very trustworthy. Or, possibly worse, he uses the reports to make major decisions anyway, despite profound doubts about the wisdom of this course of action.
Talk about an indictment of technology! I asked Corrigan how common that scenario really was.
"More common than you'd think," he said. "Recent studies tell us that one in three organizational leaders frequently make decisions based on information they don't trust, or don't have. Half say they don't have access to the information they need to do their jobs. And 60 percent, a clear majority, think they have more data than they can use effectively."
Information governance is all about solving that problem. The idea is to make data more trustworthy so that you can then proceed confidently to use it in more ways, solve more problems and create more value -- both for yourself and for your clients, customers and business partners.
Six pillars of governance to support business goals and strategies
This, of course, is easier said than done. Fortunately, you don't have to do it alone. Corrigan explained to me that as a result of IBM's hundred-year history in business and an endless list of successful customer engagements, IBM has learned a thing or two about how information should be governed for best results -- actually, six things.
"Trusted information, as we see it, is dependent on six key technology aspects," said Corrigan. "Basically, you need to ensure that information is understood, clean, holistic, current, secure and documented."
Let's walk through those aspects briefly.
Understood information is information that has a clear, established context. That means its structure, its source and all associated metadata. Information has to be understood in this sense before definitions and policies concerning it can be shared across projects.
Correct information is just that -- correct. It's been standardized and cleansed, is in the right format and is known to be accurate. Logistics companies that ship products, for instance, will need to be quite sure they have the correct shipping address or customer satisfaction is going to take a major hit.
Holistic information is information that's been reconciled across all repositories, so that inaccurate versions of it are removed and a single accurate version is left. The logistics company above may have a correct shipping address on file for a customer, but it will also need to get rid of the other five addresses it also has, in other databases, all of which are completely wrong.
Current information is chronologically accurate. Keeping all information forever, as if it were all perpetually useful, will inevitably create problems. Instead, information should have an expiration date (rather like milk, water filters or members of Congress). This minimizes the odds it will influence decisions in ways it shouldn't.
Secure information has been protected and monitored over its lifecycle to verify only the right people have seen it, changed it or used it in any way. One of the best ways to increase the trustworthiness of information is to keep the wrong people from getting access to it in the first place.
Documented information has a known lineage to establish its history. This is rather similar to the idea of provenance in the art world, used to reflect changing ownership. If you're planning to spend $50 million on a Picasso, you need to be sure it was not in fact painted eight years ago by someone named Steve. Just as with provenance, information lineage can be used to trace problems, guide decisions and yield a better outcome.
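To summarize the six aspects in a form an engineer might recognize, here is a minimal, purely illustrative Python sketch -- the names and checks are my own invention, not an InfoSphere API -- treating trust as a checklist attached to each piece of information:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class GovernedRecord:
    payload: dict
    source: str                    # understood: known origin and metadata
    validated: bool                # correct: standardized and cleansed
    duplicate_of: Optional[str]    # holistic: reconciled to a single version
    expires: date                  # current: information has an expiration date
    allowed_readers: List[str]     # secure: access is restricted and monitored
    lineage: List[str]             # documented: known history, like provenance

def trustworthy(rec: GovernedRecord, today: date) -> bool:
    """A record is usable for decisions only if all six aspects hold."""
    return (rec.source != ""
            and rec.validated
            and rec.duplicate_of is None
            and today <= rec.expires
            and len(rec.allowed_readers) > 0
            and len(rec.lineage) > 0)
```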
All of these capabilities are provided by IBM's InfoSphere family, which includes leading solutions like InfoSphere Information Server, InfoSphere Guardium and InfoSphere Master Data Management.
InfoSphere solutions aren't just standalone tools; they interoperate at a deep level, forming a complete information governance solution. This solution, in turn, helps organizations get the best use out of information even in the most sophisticated cases, where information volumes are incredibly high, use cases are many and it's critical that the information be as trustworthy as possible.
Corrigan sees this interoperable design, in which governance capabilities are logically linked, as fundamentally necessary if major IT initiatives are really going to be successful in a pragmatic sense.
"Common projects that drive the need for integration and governance include a newly installed enterprise application, or a data warehouse or big data system that is the foundation of analytics and reporting," said Corrigan. "Improving the trustworthiness of information in each of those enterprise projects requires various combinations of the six aspects, through Information Governance technology, to fully satisfy requirements. That's why we see Information Integration and Governance as a common platform of integrated capabilities for data integration, data quality, privacy and security, lifecycle management, and master data management."
Additional Information
- Find out more about Information Integration and Governance
- Join the InfoGov Community and become a governance leader
- Read the Forrester report on turning data into business value
- Get smarter about smarter analytics at Information On Demand 2012
- Register now for Information On Demand 2012
- Listen to this podcast to learn how to manage and leverage information better
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
My cousin's wife told me recently that they wanted to buy a house, but weren't sure they could justify such a huge investment in such a doubtful economy.
So I told her this: "Buy a few square feet. Take a few weeks, try them out and see what you think. If you like 'em, buy some more square feet. Then a whole room. Then a whole floor. Eventually, maybe, you'll have your dream house."
Of course, this was just a joke. But most of the time I think it's actually very good advice, because it's very easy to apply and it applies to so many different circumstances.
It certainly applies to physical fitness, where trying to accomplish too much, too soon will just burn you out, or put you in the hospital, instead of making you fitter. It also applies to marriage; getting engaged on the second date is generally not considered a love-life best practice.
Much the same kind of thinking applies rather naturally in IT. It shows up, for instance, in the form of kernel-based operating systems like Linux and all modern versions of Windows. The kernel represents a solid initial foundation that handles core tasks like memory management, to which any number of logical capabilities can be (and are) added to form the complete OS.
And these same sorts of ideas apply on a far larger scale in the context of cloud computing, I think. Because organizations can't know with perfect accuracy in advance how best to develop and utilize cloud for their own particular circumstances, it's probably wise for them not to think of and develop the cloud as a monolithic entity -- a thing they have to roll out perfectly and completely on day one -- but rather as a foundation to which they can add new capabilities over time.
If I had to guess, in fact, I would say that it was exactly this reasoning that led IBM to give SmartCloud Foundation that title. It's meant as the initial "cloud kernel" on top of which you can subsequently add new layers and new capabilities that match your business requirements, just as Linux developers add Linux services, all of which run on top of the Linux kernel.
Why manage the cloud when the cloud can manage itself?
As it happens, I prefer certainty to doubt. So rather than just keep guessing about IBM's nomenclatural logic, I decided to ask an expert: Marco Sebastiani, Product Manager for IBM Service Delivery Manager and Cloud Solutions.
Sebastiani not only confirmed my interpretation, but ran with it in what I thought was a pretty cool direction.
"You can think of cloud management software almost as a set of nested Russian dolls," he said. "Practically any cloud is going to need to be able to do things like create virtual servers, and track key assets, automatically. That basic functionality corresponds to the innermost Russian doll. We address that with SmartCloud Foundation's entry cloud solution, which does provisioning and image lifecycle management. But then, once you have that set up, you can easily add more capabilities over time: bigger dolls. Every larger doll, in turn, leverages the capabilities of the smaller ones. And the cloud intelligently and automatically orchestrates all of its capabilities based on business policies."
So, to pursue this analogy, what's the next doll up from SmartCloud Foundation?
The answer, it seems, is IBM Service Delivery Manager -- a set of capabilities, delivered as a pre-integrated software stack, that can help organizations leverage clouds to do even more, and create more value, in the areas where they most need it.
"The idea of this solution," said Sebastiani, "is to simplify, accelerate and automate service fulfillment. It minimizes the amount of manual work IT has to put into the cloud by making the cloud much more self-governing and self-optimizing. So suppose you're an employee who wants a new service in the cloud. Instead of having to submit a request to IT to create that service, you can just ask the cloud itself to do it. And, by orchestrating key tasks in logical ways, that's just what the cloud will then do. In this way, service management becomes much easier to pursue because services running in the cloud basically manage themselves, cradle-to-grave."
This fits Sebastiani's analogy rather well, too. Return to the idea of Russian dolls for a minute, remembering that the innermost cloud doll does provisioning and monitoring of virtual servers.
What IBM Service Delivery Manager does, in turn, is build bigger dolls on top of that, automatically leveraging those functions over time, in ways that fulfill business requirements, while also adding entirely new capabilities and entirely new value.
End-to-end optimization of the complete service lifecycle
This solution, for instance, includes an intuitive portal interface available via any standard Web browser. This, in essence, is the front-end needed to create new services that will run in the cloud.
Using it, one can basically instruct the cloud: "This is what I'd like to do, this is when I'm going to need to be able to do it and this is how important it will be to the business."
Then the cloud basically does the rest -- ensuring that new virtual servers are created and eliminated on time, provisioned using the right server images, and that this entire process doesn't conflict with or compromise existing services unacceptably. (If that sounds like "automatic IT governance" to you, you're pretty close to the mark.)
To do that, of course, the cloud needs to be able to allocate critical resources fluidly and dynamically -- resources like processing power, memory, storage and even network bandwidth. This capability, too, is provided by Service Delivery Manager. It is continually aware of the available resources, discovers new resources when they are added to the general pool and doles out resources when and where they're needed. Then, when the demand level falls, the cloud pulls those resources back to the pool, or directly assigns them to another service that happens to need them at that point.
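As a rough mental model of that allocate-and-reclaim cycle, here is a toy Python sketch -- invented names throughout, and in no way how Service Delivery Manager is actually implemented:

```python
class ResourcePool:
    """Toy model of a shared cloud pool: allocate on demand, return when idle."""

    def __init__(self, cpu: int, memory_gb: int):
        self.free = {"cpu": cpu, "memory_gb": memory_gb}
        self.in_use = {}  # service name -> allocation

    def allocate(self, service: str, cpu: int, memory_gb: int) -> bool:
        if cpu <= self.free["cpu"] and memory_gb <= self.free["memory_gb"]:
            self.free["cpu"] -= cpu
            self.free["memory_gb"] -= memory_gb
            self.in_use[service] = {"cpu": cpu, "memory_gb": memory_gb}
            return True
        return False  # insufficient capacity: queue, or reclaim idle allocations

    def release(self, service: str) -> None:
        """When demand falls, resources flow back to the pool for reuse."""
        freed = self.in_use.pop(service)
        self.free["cpu"] += freed["cpu"]
        self.free["memory_gb"] += freed["memory_gb"]

pool = ResourcePool(cpu=64, memory_gb=256)
pool.allocate("payroll-test", cpu=8, memory_gb=32)
pool.release("payroll-test")  # capacity immediately available to other services
```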
Also worth noting is the fact that all of that happens far more quickly and efficiently than it would if it were overseen by human talent. So, because fewer resources are wasted, fewer are needed in the first place -- a major cost-saving opportunity for the organization, which can now get by on less total processing power, memory, storage and bandwidth than it would have thought possible before the cloud.
Real-time monitoring is another major capability. Service Delivery Manager continually tracks the health and performance level of both virtual and physical resources -- a critically important function given how incredibly dynamic a cloud can be. So let us imagine that a given node (physical host) fails due to a toasted logic board; Service Delivery Manager will automatically notice and report that issue, leading to a quick and accurate failover of the associated service to a different, much healthier node.
Cost-tracking is yet another major strength of this solution. Given the intensely shared and interconnected nature of a cloud, where so much is happening automatically, you might expect it'd be difficult figuring out the costs created by different cloud services and systems -- and business teams and projects that use the cloud. And normally you'd be right.
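For intuition, the arithmetic behind shared-infrastructure chargeback can be sketched in a few lines of Python, with entirely hypothetical figures (real metering, as Sebastiani explains next, is far more granular):

```python
# Naive chargeback: split the monthly bill in proportion to metered usage.
monthly_infrastructure_cost = 90_000.0        # total cost of the shared cloud
usage_hours = {"team-a": 4_000, "team-b": 1_000, "team-c": 500}

total_hours = sum(usage_hours.values())
chargeback = {team: monthly_infrastructure_cost * hours / total_hours
              for team, hours in usage_hours.items()}
# {'team-a': 65454.54..., 'team-b': 16363.63..., 'team-c': 8181.81...}
```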
"Service Delivery Manager changes all that," said Sebastiani. "It gives you granular insight into exactly how costs are trending in all those different ways -- in as much or as little detail as you need. So if you're using your cloud in a public model, it can tell you exactly how much to charge your customers for their particular cloud utilization, even though all customers are using the same hardware. Or if you have a strictly private cloud, it will tell you how much you should charge back to different groups. This way, it creates the kind of insight that over time can help, or encourage, those divisions to try to keep their costs down."
Additional Information
- Find out what Cloud and IT Optimization can do for your organization
- Learn more about Cloud Service Delivery & Management
- Discover the benefits of cloud with the cloud simulator game
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Data is the oil of the 21st century, and analytics is the combustion engine.
Les Rechan, General Manager of IBM Business Analytics, said it first in his opening address: "Data is the oil of the 21st century." In no field is this truer than in risk and financial services. Rechan certainly wasn't the last to make this powerful statement. Throughout the morning, speaker after speaker explained this new fundamental truth.
The volume of digital data in 2011 totaled 1.8 zettabytes, and that number is growing exponentially, explained Sarah Diamond, General Manager of Global Consulting Services for IBM, during her plenary address.
Dr. Michael Zerbs, President of Algorithmics, an IBM Company, and Vice President of IBM Risk Analytics, explained how, according to the IBM IBV / MIT Sloan Management Review Study 2011, 58 percent of organizations said they are using analytics as a competitive advantage. These organizations are 220 percent more likely to outperform their peers. There is no doubt: Analytics-driven organizations outperform. This is the thesis of Smarter Analytics.
In risk management, becoming a true analytics-driven organization requires a focus on the whole enterprise in order to fully optimize outcomes. See how Algorithmics, an IBM Company, is helping its Banking, Financial Markets, Insurance and Asset Management clients achieve this in enterprise stress testing here, and in enterprise credit management here.
Regulatory compliance alone is insufficient for success in financial services.
Dr. Laura Kodres, Division Chief for the Global Financial Stability Division in the Monetary and Capital Markets Department of the International Monetary Fund, argued in her plenary address that simply meeting the Liquidity requirements of Basel III is not enough. Systemic liquidity risk was at the heart of the financial crisis, and nothing in Basel III addresses the role played by 'non-banks' in systemic liquidity risk, not to mention the relationships between them. (Dr. Kodres was careful to note that these are her own opinions and not necessarily those of the IMF).
Earlier in the day Dr. Zerbs alluded to the same argument, stating that higher capital requirements alone do not make the financial system more stable. Firms must be cognizant of the "unintended consequences" of regulatory reform. And in order not only to significantly mitigate a firm's risk exposure but also to enhance its performance, organizations must transform their business models.
Embracing analytics for risk-aware decision making transforms business models.
Gone are the days of risk management in "silos." It is absolutely essential that financial services organizations embed risk analytics in decision-making processes. This requires vertical integration -- bringing risk-aware decision making from the back office to the front office -- and a holistic view across risk types, such as market risk, credit risk, liquidity risk and operational risk.
Firms must capture the intrinsic linkages across risks and asset classes, and ensure consistency across business lines, in order to succeed. And risk intelligence must be woven into the fabric of the business -- this is what Dr. Zerbs meant when he discussed bringing risk out of the back office.
If you can optimize outcomes at the point of impact -- for example, with real-time decision support on the trading desk -- you can enable action based on risk insights. That is the essence of Smarter Analytics, and the mission of IBM Risk Analytics for the Financial Services Sector. It is also how IBM Risk Analytics intends to become essential in financial services.
Watch all the action.
To access the full suite of coverage from Day One at ARC 2012: Risk 360, register here to watch the replay of the Livestream coverage from our May 8, 2012 plenary sessions.
Join us again tomorrow via Livestream, using the same registration link, to hear further insights from Algorithmics and IBM at 9:10 a.m. UK time. The May 9, 2012 plenary sessions have a particular focus on how Algorithmics and IBM are working together to leverage each other's strengths in both technology and services.
And don't forget to follow @IBMRisk on Twitter and the event hashtag #ARCRisk360 on May 9, 2012 for more minute-by-minute coverage, including links to additional resources on our various topics.
Do you really understand the reason you make decisions?
Sometimes the mind wants to unconsciously push us into a certain decision when there's a better way to think about it.
That's the premise of the book, Think Twice: Harnessing the Power of Counterintuition, written by Michael Mauboussin, Chief Investment Strategist at Legg Mason Capital Management and a keynote speaker at IBM's upcoming Vision 2012 conference in Orlando, May 14-17.
Vision is IBM's global conference for finance and risk professionals, helping them improve planning, budgeting and forecasting; identify and mitigate risk; and meet the demanding requirements of XBRL, IFRS, Basel II and Solvency II with greater confidence.
I talked to Mauboussin about his book, making data-driven decisions, some common pitfalls decision makers face, and his upcoming talk at Vision.
"What's very exciting is that in the last half dozen years, we've had a real influx of data, and we're now just learning how to tap that data for the benefit of better decision making," said Mauboussin. "Now we can create a better intersection between value creation and making decisions."
The problem, however, according to Mauboussin, is that we still have the same cognitive makeup and the propensity to make common mistakes.
"We often think about our own decision making as being objective and fact-based and rational. And we tend to underestimate systematically how important the social context is for our decision making," said Mauboussin.
To illustrate this point he told an interesting story from his book.
Researchers went into the wine section of a supermarket and set up French and German wines next to each other that were roughly matched in price and quality. Over a two-week period they alternated playing distinctively French and distinctively German music to see if it would have any influence on purchase decisions.
Surprisingly, they found that when French music played, people bought French wine 77 percent of the time, and when German music played, German wine 73 percent of the time. When asked if music affected their selections, the consumers unanimously said no.
"This basic experiment can be extrapolated to a lot of organizational settings where we think of ourselves as trying to be conscious and mindful as we make decisions. But indeed what is going on around us can be deeply influential to our decisions," said Mauboussin.
So what do we do?
According to Mauboussin, the answer is to integrate more data into quality decisions. However, there is still a tension between the intuitive, go-by-the-seat-of-the-pants experience group and the analytically minded group.
"Either extreme is not going to work, but a blend between the two is the right way," said Mauboussin.
He then explained the "expert squeeze" that affects decision makers. On one side are computers and algorithms that are doing jobs quicker, more accurately and more cost-effectively. On the other side are problems that are complex, with high degrees of freedom and lots of possible outcomes.
For instance, there are certain types of tasks that people can learn, internalize and then intuition will work really well, such as chess or hitting a baseball. But, when someone steps into the domains of complexity with numerous outcomes, all bets are off.
This is why Decision Management solutions are reaching a tipping point. By combining predictive models, business rules, scoring and optimization techniques, organizations can generate recommended "next best actions" for each individual customer, citizen, constituent or employee.
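To make the pattern concrete, here is a minimal, hypothetical Python sketch of a predictive score constrained by business rules to yield a next best action. Every name and threshold is invented; this illustrates the general technique, not any particular IBM product:

```python
def predict_churn(customer: dict) -> float:
    # Stand-in for a trained model; a real one would be fit on history.
    return min(1.0, customer["complaints"] / 5)

def next_best_action(customer: dict) -> str:
    """Blend a predictive score with business rules to recommend an action."""
    churn_risk = predict_churn(customer)          # model output in [0, 1]

    # Business rules constrain what the model is allowed to recommend.
    if customer["months_overdue"] > 3:
        return "route_to_collections"
    if churn_risk > 0.7 and customer["lifetime_value"] > 10_000:
        return "offer_retention_discount"
    if churn_risk > 0.7:
        return "schedule_courtesy_call"
    return "send_standard_newsletter"

action = next_best_action({"months_overdue": 0, "lifetime_value": 25_000,
                           "complaints": 4})     # -> offer_retention_discount
```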
"The idea of running every statistic through a persistent and predictive framework can be very helpful in tightening up what organizations do in measuring their own performance," said Mauboussin. "This doesn't mean experts are going away altogether. Humans still need to think about the strategic issues and use the data to inform their decisions."
And we haven't even discussed the role luck and skill play in decision making. You'll just have to go to Vision to hear more, or read Mauboussin's upcoming book due out in November 2012, The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing.
However, Mauboussin does offer a few more additional pieces of advice for organizations:
- Focus on the process. While outcomes are what matter, the key is that the proper approach to process is executed faithfully.
- Be very careful of the lessons you learn from history; examine past successes, but also the failures.
- Establish a culture of analytics. Organizations that don't are going to be at a market disadvantage because it's an important source of value creation.
- Ensure that non-financial performance measures are linked to company strategy and ultimate value creation.
- Commit to continual learning. Being able to understand big ideas from various disciplines and cultures can be extraordinarily helpful in problem solving.
For more information:
- Register here for the upcoming Vision 2012 conference
- Download the Vision conference guide for background on keynotes, elective sessions, demos and workshops
- Read a previous blog post on minimizing risk and improving performance
Tell me if this sounds familiar: You're pondering whether to do something potentially risky -- perhaps quit a job, switch to a completely different career path or even start a business. You have many motives to do so, yet the road ahead seems very unclear, and you're uncomfortable with that. And someone else says, "Oh, go for it. Everything in life is risky. You could get hit by a bus any day... but that doesn't stop you from leaving the house."
Well, that's true, of course, but as an argument it has a really basic problem: it's number-free.
Not all risks, in other words, are the same. The risk of getting hit by a bus is different from, and much smaller than, the risk of starting a business, watching as it slowly fails and getting into deep debt.
Making such a decision reasonably competently means finding a way to clarify, quantify and prioritize the kinds of risks you're facing in a given strategy -- and weighing them against the benefit you're trying to create.
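At its simplest, that weighing is an expected-value calculation. A toy Python example, with entirely made-up numbers:

```python
# Weighing quantified risks against expected benefit (all figures invented).
options = {
    "keep_current_job": {"upside": 5_000,   "downside": 0,       "p_fail": 0.0},
    "start_a_business": {"upside": 150_000, "downside": -80_000, "p_fail": 0.6},
}

for name, o in options.items():
    expected = (1 - o["p_fail"]) * o["upside"] + o["p_fail"] * o["downside"]
    print(f"{name}: expected value {expected:,.0f}")
# Different risks carry different numbers; "everything is risky" is no argument.
```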
This, in essence, is a problem confronted every day by businesses making complex decisions. They'd like to create improvements or pursue new goals in a given area. But in a perfect world, they'd also like to avoid getting hit by a bus.
By no coincidence, this is also a major focus of IBM's considerable interest in advanced business analytics -- recently described by Mike Rhodin, Vice President of IBM Solutions Group, as "the silver thread woven throughout our portfolio." Risk assessment and mitigation are central to business strategies -- almost all strategies, in almost all industries. And advanced analytics can deliver some of the best available insight to accomplish that.
Get a moment of clarity -- actually, get lots of them
To get a little more clarity about this area, I talked to John Kelly, Worldwide Market Segment Manager for IBM's Business Analytics group, about IBM's perspective... and how that perspective is going to be explored at the forthcoming Vision 2012 conference, to be held May 14-17 at the JW Marriott Grande Lakes in Orlando.
Like me, Kelly sees analytics as a powerful visualization tool -- a way to understand different possible futures, and steer your organization into a future that offers more benefit and lower risk.
"Customers are looking to improve decision making and business performance through increased insight and business intelligence," he said. "That's exactly why IBM has recently labeled analytics as one of our four major strategic directions -- we know how much potential this area really has. And we'd like our clients to realize as much of that potential as possible."
Risk assessment and mitigation, of course, have a long history in some areas (like finance) and are less well understood and established in other areas (like technology startups), but the root appeal remains the same in every case. If you want to get the best possible outcome, you need to establish the most likely, and most potentially devastating, pitfalls.
Analytics tools can work almost like a car's high-beams, helping you navigate and get where you're trying to go more safely. That's a goal that almost any business leader, in any industry, at any organization of any size, can understand and appreciate.
Regulatory compliance stands out as a growing challenge
And beyond that general value proposition, IBM is making considerable strides in applying analytics effectively in areas that are of particular concern to its clients. One such area: regulatory compliance and policy management.
In the wake of major scandals dating back more than a decade, such regulations have increasingly been created with the stated goal of minimizing various forms of unacceptable risk to the public, to business employees and customers, as well as to stockholders. And that, of course, is a laudable goal.
But complying with those regulations can be a headache even for the best-intentioned organizations that are really committed to compliance and dedicating tremendous resources to the job. Even when compliance seems to have been achieved, it hasn't always been. New regulations appear every year; it's not the easiest thing in the world to know which apply in a given case, and under what conditions, and what the best organizational response should be.
IBM, it seems, can help. "Our solutions deliver analysis and reporting, to provide visibility into the state of risk in the enterprise, including evidence of compliance or remediation status, trending and point-in-time analysis, and ad hoc querying," said Kelly.
Consider what that means in practical terms. Not only can you understand much more clearly, quickly and easily the extent to which your organization is in compliance, but you can also demonstrate that compliance on demand, in whatever level of detail is required. In the event of an audit, such a demonstration will be essential -- and avoiding potentially hefty penalties and fees will be much simpler. What organization wouldn't be interested in solutions like that?
One solution family drawn from IBM's analytics portfolio is particularly strong in the area of compliance and risk: IBM OpenPages. This suite of tools focuses specifically on governance, risk and compliance, not just identifying and monitoring risk, but also putting in place a programmatic way to communicate and manage risk exposure across the enterprise to reduce unexpected losses, penalties and fines (not to mention reputational damage), while at the same time improving decision making.
Its compliance capabilities, for instance, are directly on point. Organizations routinely create (and enforce) policies to drive compliance... but not always in as governed and coherent a fashion as they might. (Banking industry, I'm looking at you when I say that.)
OpenPages Policy and Compliance Management automates the lifecycle of compliance policies from cradle to grave, reducing redundancy and optimizing the policies you keep in a way that spans resources, business groups, projects and workflow. Organizations that have a formal implementation of risk mitigation, but would like to tune or enhance it to better align with their current and future needs (not to mention future regulations), will find this solution particularly compelling.
A better outcome can result from risk-aware decision making
Risk management is increasingly becoming a strategic, executive-sponsored discipline that many organizations view as providing a competitive advantage -- one where risk and performance are aligned, and where governance, risk and compliance is part of "annual strategic planning."
An integrated governance, risk and compliance program also yields a wealth of information that can be leveraged for risk-aware decisions. Through business intelligence and reporting, information from an integrated program is utilized beyond the risk and compliance office, leveraged by business managers to make risk-informed decisions about resource and investment allocations in product planning.
Optimize your risk management strategies in many dimensions
Other OpenPages solutions -- which interoperate with each other via a shared foundation of data -- are available to deliver similar capabilities in related fields like:
- Operational risk management. This offering can identify, manage, monitor and analyze operational risks of all types, all from a single point of command to spur a particularly agile response. From better, more accurate insight comes a faster and more comprehensive remediation.
- Financial controls management: Regulations like Sarbanes-Oxley in the United States are mirrored by similar regulations in other countries around the world -- and for global organizations, each crossed border represents a new set of financial regulations with which to comply. This solution focuses on reporting, offering a centralized architecture for analysis, documentation and data management.
- IT governance. IT has become central to almost everything organizations do today. As a result, risk assessment for IT assets, services and data is needed to ensure that IT delivers the intended value -- ideally, on time and under budget -- even in the case of complex projects that take years to complete.
- Internal audit management. For large organizations that proactively conduct audits of their own, this solution is a natural fit. Using it, they can automate many of the basic processes involved, as well as connect the results logically to other risk assessment initiatives they have in place.
Anyone interested in getting more information on these and related topics should definitely consider attending the previously mentioned Vision 2012 conference.
This is the premier global conference for finance and risk professionals, and the most high-profile stage for IBM to discuss everything it has to offer in this rapidly evolving, increasingly hot area.
When I asked Kelly to sum up in a nutshell what IBM will be discussing at Vision 2012, he said this:
"IBM Risk Analytics enables the Smarter Analytics approach -- turning risk information into insight, and insight into better business outcomes."
I like the sound of that.
Additional Information
- Learn how Business Analytics improves business performance
- See what Vision 2012 offers for finance and risk management professionals
- Gain relevant business insight through Smarter Analytics
- Smarter Analytics for the financial industry
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
IT professionals -- and I say this with compassion, having been one myself -- tend to think way too much about the T, and not nearly enough about the I.
What do I mean by that? I mean that while technology certainly drives business services, it is not, ultimately, the most valuable player on the IT team. Information -- data -- is.
Data suggests new strategies, quantifies their success or failure, and informs virtually every operational decision (whether it's made by a person or a processor). It's probably not going too far to say that, in a large sense, the fundamental mission of IT is to get the best possible use from data throughout its lifecycle.
And while structured data, like core databases, usually gets most of the time, energy and money, it's unstructured data that comprises some 80 percent of the total in a typical enterprise. This is not the tip of the iceberg, but the hidden bulk of it.
Think of all those Word files, presentation decks, spreadsheets, and PDFs. Think about case notes written up hastily during a phone call; they may never make their way into a database, yet can contain incredibly powerful information. Think of the sum total of data created daily in internal communities, forums, wikis and other collaborative social platforms -- an area that's certainly hot and getting hotter by the day.
Is the enterprise really getting, as I put it earlier, the best possible use from that data?
The answer is almost certainly no, and the consequence is almost certainly diminished agility, creativity, innovation and responsiveness -- all key for the enterprise to succeed.
This is the heart of the argument for Enterprise Content Management (ECM) solutions. By acknowledging the crucial importance of unstructured data, and leveraging it for as much value as possible, organizations can put themselves in a much stronger, more informed, more competitive position going forward.
ECM solutions must evolve with the changing times
Not all ECM solutions are created equal, though. And not all ECM solution providers have the depth of insight, or provide the mature capabilities, that the enterprise will need for best results.
I recently had a chat with Craig Rhinehart, Director of ECM Strategy and Market Development for IBM (check out Craig's ECM blog), and he agreed on that point, calling out that IBM has been developing leading ECM solutions for nearly 30 years and first published research on the topic in 1957, over 50 years ago. That's longer than most IT professionals have even been alive.
And as enterprise infrastructures, content types, strategies and goals continue to evolve, he told me, IBM Software is continuing to evolve its ECM capability and portfolio in parallel, keeping close pace with the changing times.
"Actually, ECM has never been more relevant than it is today," said Rhinehart. "These solutions can drive value in an organization's most valuable processes. Think of insurance claims, for instance; they're really the make-or-break center of everything an insurance organization does. And claims processing typically revolves around many forms of unstructured data in the context of case management. All driven from the need to deliver better service to their customers in a highly competitive market. So our ECM solutions are a perfect match."
That's a value proposition that's becoming more and more applicable over time, too. As unstructured content continues to expand in volume, and diversify in nature, major challenges for enterprises emerge in managing it all -- challenges that will often demand a new approach to ECM.
Five great ways to squeeze more value out of your unstructured data
"These challenges really come down to five different areas where we're seeing customers have problems," explained Rhinehart. "It's within them that content management gets applied and customers are seeing value."
One such challenge is document imaging and capture -- basically, grabbing data from non-digital sources, like faxes or snail mail, then sharing it and managing it in all the ways that digital solutions do best.
This is the sort of thing that can really generate tremendous value if it's done right. I once worked at a state government office where a team of more than 50 lawyers was chartered with responding to all snail-mail questions in two days or less -- no matter how complicated those inquiries might be. Given a turnaround time like that, efficient imaging and capture tools were critical to getting the job done, both right and on time.
And that's just scratching the surface, according to Rhinehart. "There's a global logistics company using IBM ECM production imaging technology to process 600,000 pages per day," he said. "They expect to process 4 million per day when the rollout is completed. And already, they move shipments across borders with 30 percent fewer resources than before. Really, any company has too much paper -- it's a great opportunity for enterprises to reduce cost and risk."
Social content management is another area where ECM capabilities can pay off in a major way -- partly because most of this content is extremely unstructured by nature. Collaborative platforms have typically been developed with a focus on empowering user communication, and rightly so, but it's important that all their content still be connected effectively to the organization's repository of record.
"It's the Wild West right now," said Rhinehart. "If customers don't have a social content strategy today, they need to get one pretty soon. And we at IBM are certainly investing in that area. We think of it as a sea change in business and we plan to continue to lead the way."
Information lifecycle governance is a third area where ECM solutions can play a hand. Here, the focus falls on how information is managed throughout its lifecycle, in accordance with its business needs and other variables such as regulatory and legal obligations.
For instance, by identifying information of lower priority, then moving it to storage infrastructure of similarly lower cost -- migrating it from, say, disk arrays to tape or optical media -- organizations can preserve what they need, yet drive down the associated operational overhead. It also becomes possible to identify what isn't needed at all, eliminating it from the complete information infrastructure and freeing up much-needed storage resources in the process. Rhinehart adds that "our solutions help our customers dispose of information in a defensible manner. You can't just hit the delete key."
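As a sketch of what such a lifecycle policy might look like in code -- the thresholds here are invented for illustration; real policies follow regulatory retention schedules -- consider:

```python
from datetime import date, timedelta

def storage_tier(last_accessed: date, retention_ends: date, today: date) -> str:
    """Toy lifecycle policy: demote cold information to cheaper media,
    and flag it for defensible disposal once obligations lapse."""
    if today > retention_ends:
        return "dispose"                 # defensible deletion, not a reflex
    if today - last_accessed > timedelta(days=365):
        return "tape_or_optical"         # rarely touched: cheapest tier
    if today - last_accessed > timedelta(days=90):
        return "nearline_disk"
    return "primary_disk"

print(storage_tier(date(2011, 1, 5), date(2015, 1, 1), date(2012, 6, 1)))
# -> tape_or_optical
```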
ECM solutions can add value by automating and optimizing those processes that are content centric. This is Advanced Case Management (ACM). According to Rhinehart, "ACM helps by addressing the ad hoc, exception-oriented business processes where collaboration is key and where getting the right decision made is the desired outcome. Traditional BPM solutions aren't the right approach for these processes. You wouldn't want to use a shovel to drive in a nail. ACM enables a more dynamic solution development process, avoiding many of the issues that make rolling out new applications a lot slower, harder and costlier than it should be."
Some organizations may describe ACM solutions as dispute management, customer service resolution, care coordination, interventions or even claims processing. These cases are not a typical straight-through process. They involve invoices, contracts and other forms of enterprise content, and tend to be customer centric. One major retailing chain doing this is now saving US$2.1 million a year in its call center on labor alone.
Finally, content analytics can provide some of the most interesting, and potentially explosive, possibilities for unstructured data in the enterprise today. Just as traditional analytics tools focus on database-driven content, ECM analytics capabilities focus on unstructured content -- surfing through it for patterns or trends that (once implemented as strategies) can create new business value.
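As a crude illustration of the idea -- a few lines of plain Python, nowhere near what IBM Content Analytics actually does -- even simple term counting across unstructured notes can surface an emerging theme:

```python
from collections import Counter
import re

case_notes = [
    "Customer reports late delivery, package damaged in transit.",
    "Second complaint this month about damaged packaging.",
    "Delivery driver reported damaged labels at the depot.",
]

# Crude pattern surfacing: which terms spike across unstructured notes?
words = Counter(w for note in case_notes
                  for w in re.findall(r"[a-z]+", note.lower())
                  if len(w) > 5)
print(words.most_common(3))   # 'damaged' surfaces as an emerging theme
```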
Rhinehart seems particularly impressed with the strides IBM has taken in this area in recent years, as exemplified by the success of the Watson project -- best known for having defeated Jeopardy champions in head-to-head, real-time competition.
"Watson uses IBM Content Analytics technology that is commercially available today for natural language processing. It's being used to leverage and exploit enterprise content by understanding business insights currently trapped in content. Content Analytics is being used to detect fraud, solve crimes, improve healthcare research, find new business opportunities, understand the voice of the customer and more. Think Business Intelligence for content."
I share his appreciation of both Content Analytics and Watson. Watson not only comprehends natural language queries, but also leverages many different analytics algorithms, running in parallel, to arrive at answers deemed likely to be accurate. This is well beyond the scope of ECM, or even enterprise IT as a whole, as it exists today.
"When you can pose questions to a computer in natural language, that's just a whole new ballgame -- that's something IT has never even tried to do before," said Rhinehart. "I've heard it said that every computer before Watson is nothing but a big calculator. And I think there's a lot of truth in that."
Additional Information
- Learn more about Enterprise Content Management
- Check out Craig Rhinehart's blog
- Check out the Enterprise Content Management blog
- Gain insight into the ECM Forum at Information On Demand 2011
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.