Here's a straightforward proposition: Software is more and more critical to the success of business strategies. So it's getting more and more critical to develop that software properly in the first place. Sounds simple enough, right? Just hire good engineers who don't write spaghetti code and who play well with others. Problem solved.
Well, okay, that actually works pretty well for a software startup. At a tiny, new-to-the-world organization, you've got a brand new kitchen to cook in and a very small number of cooks. Project management almost takes care of itself -- the two-topping pizzas zip out of the oven on time and under budget. They taste pretty good, too.
At the enterprise level, however, software engineering can easily go a bit wonky. Ponder if you will the following variables:
- The total size of a codebase, which at the enterprise level can run to billions of lines of code
- The number of functional units to optimize and test
- The number of programmers on a project
- The extent to which applications and services rely on each other to work
- The number of years (or decades) in which a particular codebase has gradually and imperfectly evolved
Scale these variables up far enough and you may find you've gone from a simple pizza, perfectly executed, to something else: a monstrous, 50-course, semi-French cataclysm of a meal that nobody ordered, that smells funky and that, if put in front of diners, will be hurled violently back into the kitchen and cost the restaurant its cherished good name.
Well, I can see I've worked my cooking analogy far past its reasonable life expectancy. However, having made my point, I can get to the heart of the matter, which is this:
For the largest organizations and software engineering projects, today's integrated development environments (IDEs) are much more than just tools. The IDE is the individual practitioner's working environment, seamlessly integrated with team-wide capabilities. IDEs are collaborative partners -- mentors, even -- that help guide development teams, projects, applications, services and codebases down the road to successful application lifecycle management and enterprise modernization. Given a robust, thoughtfully designed IDE, the best practices almost implement themselves.
What with Rational Developer for System z version 8.5 < http://www-01.ibm.com/software/rational/products/developer/systemz/ > hitting the streets this week, now seemed like a good time to discuss these and related issues with an expert.
That expert was Richard S. Szulewski, IBM Product Manager for that very offering. Szulewski put matters on an etymological footing that wouldn't have occurred to me.
"Just look at the term IDE," he said. "IDE: Integrated (that is, you have seamless access to all the facilities you need to do your job), Development (development is far more than just changing the code), Environment (a place from which to not just do your job, but do it effectively and efficiently). That is a lot more than just a pretty editor. That is what Rational Developer for System z offers."
And in Version 8.5, it offers a more complete and well-rounded rendition of that concept than ever before. The new solution has been designed specifically to help organizations not just get more value from the mainframe, and from their developers, but also get it at a higher level of abstraction -- from development projects themselves.
Consider, for instance, how it addresses the common concern of scalability -- not of the software being developed, but of the project of developing that software. To optimize large-scale project management, as everyone knows, best practices are required, but not everyone actually implements them. A really mature, thoughtfully developed IDE should make that implementation a lot easier.
Szulewski agrees. "Rational Developer for System z V8.5 includes enhancements that ease potential large-team effects as the number of people on development teams using it goes up. The idea is that any given user can access the host as if he or she were the only one using it."
For instance, consider the way the solution now automatically keeps programmer workstations up to date. Admins can simply upload new configuration files to the System z; once a programmer logs in, if the new file is needed, it'll be downloaded immediately.
That means more cross-team consistency with less effort -- a best practice by anybody's definition. It also means each programmer can spend more time on coding challenges and less on environment maintenance, which in turn leads to more productivity.
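The general pattern behind this kind of automatic configuration sync -- compare what the workstation has against what the host publishes, and fetch only what changed -- is simple enough to sketch. What follows is an illustrative sketch in Python, not the product's actual protocol; the file names and the checksum-manifest scheme are assumptions for the sake of the example:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Fingerprint a config file's contents."""
    return hashlib.sha256(data).hexdigest()

def files_to_update(host_manifest: dict, local_manifest: dict) -> list:
    """Return the config files a workstation should fetch at login:
    anything the host publishes that is missing locally or whose
    checksum differs from the local copy."""
    return sorted(
        name for name, digest in host_manifest.items()
        if local_manifest.get(name) != digest
    )

# Hypothetical manifests: filename -> checksum of contents.
host = {"codestyle.xml": checksum(b"v2"), "build.props": checksum(b"v1")}
local = {"codestyle.xml": checksum(b"v1"), "build.props": checksum(b"v1")}

print(files_to_update(host, local))  # only the changed file needs fetching
```

The appeal of the manifest-diff approach is that an unchanged workstation downloads nothing at login, so the cost of keeping hundreds of programmers consistent stays close to zero.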
Another example of scalability, this one addressing codebase size: programmers can now more easily search for, zero in on and open the specific code modules they want.
In much the same way a Google search provides a preview of the text at a given link, so that you can decide whether to click it, the new Rational Developer for System z generates a code preview. Just mouse over a module, and you can see the first few lines of its code -- it's as simple as that.

Write, visualize and test code quickly, easily... and in a way that isn't at all like French cuisine
Enhanced productivity, especially via editor refinements, is another major design strength of Rational Developer for System z V8.5. In the world of software development, editors are holy ground -- developers are so deeply invested in them, in fact, that editors compare with religion and politics as reliable argument starters.
Well, the new Rational offering actually includes three different editors, for LPEX, COBOL and PL/I. And strengths that had been limited to the COBOL editor in the past have now been stirred into the LPEX and PL/I editors, bringing them up to par.
While they differ in specific features, what the new editors have in common is the strategic goal of helping developers visually and intuitively understand and navigate the flow of code much more easily. By increasing the time developers stay in editing context, instead of having to wander elsewhere to do various tasks, the new editors also increase the developer's focus on the job at hand.
And the way the three editors have been brought into rough equivalence turns out to be an instance of a larger theme in the new release. "Rational Developer for System z V8.5," said Szulewski, "includes a conscious effort to get to better language equity in terms of the PL/I and COBOL languages."
New integrations are another strength. Since organizations often already have fairly well-developed, specific solutions and information repositories that address particular areas, such integration is a great way to leverage those resources more easily and fully -- eliminating the need to reinvent the wheel.
Organizations that already have Endevor, for instance -- a mainframe code management tool -- will find that the new Rational offering can directly display Endevor elements or packages in a tidy, sortable, customizable table.
Code coverage, too, has been improved, making it a much more straightforward matter to visualize how complete (or incomplete) software testing has been at any given point. Straight from the coverage report, it's now possible to launch a view of the source code to see colored annotations that reflect specific testing.
Code review rules have also gotten a tweak for the better, thanks to additional COBOL and PL/I rules and templates in Rational Developer V8.5; you can now even create custom rules using an easy, wizard-driven process. It all illustrates just how serious IBM is about helping organizations pursue best practices through the new IDE.
"Creating an objective means for confirming best practice adherence -- that is what the new code review capability is about," said Szulewski. "We've made it easier and faster to define what the 'coding practices' you want should look like, and provided an objective way for the individual developer and whole development teams to compare their work against those practices."
And if unit testing is your particular cup of tea, you'll probably be glad to hear that in Version 8.5, Rational Developer for System z provides an automated unit testing framework, zUnit, which is similar in nature and concept to JUnit for Java and provides similar benefits. Here, too, smart wizards are available to generate COBOL and/or PL/I test cases.
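For readers who haven't used an xUnit-family framework, the core idea is the same across JUnit, zUnit and their cousins: each test case sets up inputs, exercises one unit of code, and asserts on the result, while the framework collects pass/fail results automatically. A minimal illustration using Python's built-in unittest module (standing in for the COBOL or PL/I cases zUnit's wizards would generate -- the function under test is invented for the example):

```python
import unittest

def apply_interest(balance: float, rate: float) -> float:
    """Unit under test: a toy stand-in for a COBOL paragraph
    that applies an interest rate to an account balance."""
    if rate < 0:
        raise ValueError("rate must be non-negative")
    return round(balance * (1 + rate), 2)

class ApplyInterestTest(unittest.TestCase):
    def test_typical_balance(self):
        self.assertEqual(apply_interest(100.00, 0.05), 105.00)

    def test_zero_rate_leaves_balance_unchanged(self):
        self.assertEqual(apply_interest(250.00, 0.0), 250.00)

    def test_negative_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_interest(100.00, -0.01)

if __name__ == "__main__":
    unittest.main(exit=False)  # run the suite and report each pass/fail
```

The payoff is the same one zUnit promises on the mainframe: every change to the unit can be re-verified in seconds, and a failing case points straight at the broken behavior.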
After these test cases are built and run, the execution results can easily be displayed along with traceback information needed to isolate specific issues -- ultimately, helping to bring the software that much closer to a release version that won't remind anybody of French cooking gone horrifyingly wrong.

Additional Information
- Discover the benefits of Enterprise Modernization
- See what IBM offers for Application Lifecycle Management
- Get up to speed on IBM Rational Developer Version 8.5
- Watch videos about the features of Rational Developer for System z
- Try first-hand the new IBM Enterprise Modernization Sandbox, with no install
- Get more education with IBM COBOL and Rational Developer for System z - Distance Learning
- Visit the video library of IBM Enterprise Modernization Solutions for System z

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
In a previous blog entry I said that one of the surest roads to business success lies in understanding who customers are, what they want and how best to deliver that. But what happens when customers don't know what they want? This is a bit more awkward; now the organization has to help the customer figure that out. A pizzeria can make that happen with a menu... but most businesses don't have it quite so easy.
Netflix tackled this type of challenge via its famous $1 million Netflix Prize. In 2009, the prize was awarded < http://www.netflixprize.com/community/viewtopic.php?id=1537 > to a group who came up with an algorithm that could accurately predict what kinds of movies Netflix customers would enjoy most. It could do this, in fact, more accurately than Netflix's own algorithm, generating results that were more than 10 percent better. That's pretty impressive given the incredible diversity in taste from one Netflix customer to the next.
Modern IT vendors, whose customers' needs and goals vary just about as widely, have an even more difficult puzzle to solve. Typically, large IT infrastructures at established companies have evolved over time via a process that was more about Making Things Happen Now, and less about a long-term, governed plan of IT optimization.
The upshot is that today, IT workloads are often executed in a way the customer can easily see isn't very efficient or cost-efficient. What isn't quite as clear is how to move to a superior arrangement.
This, I think, explains the growing popularity of self-assessment tools in the IT world. Such tools, offered over the web, give organizations immediate insight into not just their needs, but also the available solutions -- often in a surprisingly accurate way, following a Q&A process.
These tools offer, in a limited sense, free consulting. And if implemented well, they can significantly shorten the path any given organization has to take toward creating a better, more optimized IT infrastructure.

Platforms are just tools -- be sure you've got the right tool for the job
So given this context, it was a pleasure talking to Penny Hill, a marketing manager with IBM Software Group who recently helped develop two such tools.
Hill reminded me that IBM's focus these days is less on the details of a given platform than on the business value it creates over time. She also suggested that this is an area of "low-hanging fruit," where organizations can often make rapid headway because they've barely gotten started.
"It's crazy," she told me, "that organizations continue to argue over the merits of a platform vs. looking at the workload characteristics that are best suited for the right platform."
That strikes me as a really good point. In the time I spent in IT, platform choice was often taken for granted in advance for all workloads -- relatively low-end x86 boxes running Windows or Linux being by far the most common platform.
Then, based on that assumption, subsequent questions were asked: "How can we accomplish such-and-such on our platform?"
The concept that different workloads have different characteristics, require different resources and are better-suited or worse-suited to different platforms was really never taken into account. So the eventual business outcome was rarely as good as it might have been.
Distributed architectures aren't always the rule, either. At institutions like banks, mainframe computing has often held sway as the dominant platform largely because, well... it held sway in the past, going back half a century in some cases. But organizations should look at their current platform as well as others to make workload decisions.
What Hill has recently worked on for IBM are two different tools that give organizations a new perspective on this whole area. If you consider distributed architectures and mainframe architectures as the two fundamental approaches, the next logical questions are: What kinds of workloads are best suited to each? And what kinds of variables should an organization consider to match platforms with workloads in every case?
Hill suggests that this switch in perspective -- from platform-prioritized to workload-prioritized -- has a natural analogy in a familiar area.
"Choosing the best-fit platform should be like buying a car," she said. "You typically look at the qualities you're looking for, i.e., good gas mileage, safety, Sirius radio, and then search for the car that meets these needs. What you don't do is pick the car first, and then try to force-fit in these characteristics."

A tailored white paper of your very own
This is why both of the IBM assessment tools put the focus directly on workload characteristics -- albeit in very different ways.
The first tool, believe it or not, actually generates a customized white paper. Following a short series of questions on mainframe ownership, workload type, number of users and the relative importance of efficiency, reliability, scalability, security and utilization, this white paper can be downloaded straight to your hard drive in Word format.
Additional questions might appear depending on your answers to the above. For instance, if your workload involves data warehousing, you'll also be asked the total volume of data in terabytes.
While the white paper is generated based on predefined content created by IBM partner IDC, the content is nevertheless chosen based on your answers, and combined in a way that will more closely reflect your particular IT context than any other white paper you are likely to find.
And as a result, it should provide unusually specific insight into the probable challenges that apply, and provide helpful recommendations concerning the pros and cons of different platforms and workload migration strategies.

Interactive assessment: It's what all the cool companies are doing
The second assessment tool offers an interactive experience based on your answers to three different sections.
The first section lets you define up to five different named workloads; for each, you'll need to provide both the workload's task (analytics, transaction processing, etc.) and current platform (whether distributed or mainframe).
The focus in the second section is on the characteristics of those workloads. For each, you'll need to specify eight different traits -- staff skill level, software license costs, capacity and so forth.
In the third section, you describe the characteristics of your current data center. Here, too, there are eight traits to consider, ranging from floor space to hardware maintenance costs to storage and energy costs.
Once you've finished your self-assessment, the tool then provides results for all your workloads. You can actually see whether a distributed model or a mainframe model is likely to yield optimal performance in each case, based on your specified criteria, via a color-coded model. And if you'd like to adjust your previous answers, to see if the results change, you can do that, too.
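The tool's actual scoring model isn't published, but the underlying idea -- rate each workload's traits, weight them, and let the sign of the total suggest a platform -- fits in a few lines. Everything below (the trait names, weights and ratings) is invented purely for illustration:

```python
# Per-trait weights: positive values favor the mainframe, negative
# values favor distributed. Invented for illustration only; the real
# assessment tool's model is certainly more sophisticated.
WEIGHTS = {
    "io_intensity": 2.0,         # heavy I/O tends to favor the mainframe
    "utilization": 1.5,          # high sustained utilization, likewise
    "commodity_skills": -2.0,    # deep x86 staffing favors distributed
    "license_sensitivity": -1.0, # per-core license pressure, likewise
}

def platform_fit(traits: dict) -> str:
    """Score a workload (trait -> 0..10 rating) and suggest a platform."""
    score = sum(WEIGHTS[t] * rating for t, rating in traits.items())
    return "mainframe" if score > 0 else "distributed"

# A hypothetical batch-billing workload: I/O-heavy, busy around the clock.
batch_billing = {"io_intensity": 9, "utilization": 8,
                 "commodity_skills": 3, "license_sensitivity": 4}
print(platform_fit(batch_billing))  # -> mainframe
```

The point of the sketch is the perspective shift Hill describes: the workload's characteristics drive the answer, and changing a rating (say, a sudden surplus of x86 skills) can legitimately flip the recommendation.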
I found it interesting, entering different combinations to see what kind of results I'd get. Based on the sample sets I gave the tool, it appears my imaginary companies have invested too much in distributed architectures -- not too surprising, really, given the widespread canard that distributed computing is intrinsically less expensive. Quite often, due to hilariously low utilization levels and frighteningly high energy costs, it's the other way around.
Hill endorses both tools as a way not just to assess your current situation, but also plan for future scenarios. Since the tool lets you enter any values you please, you can test not just the values that apply right now, but those you expect to apply in the foreseeable future.
The results might surprise you -- in a good way.
"Looking at the right-fit platform strategy is often a major mind-shift in the IT world," said Hill. "But once embraced, it opens the doors to major cost reductions and a smarter, more optimized data center architecture -- put simply, smarter computing."

Additional Information
- Try out these workload assessment tools for yourself
- Learn more about Enterprise Modernization
- Find out how you can experience smarter computing today

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
One of the key topics at IBM Impact 2012, to be held in Las Vegas April 29-May 4, will be IBM PureSystems. It's a new family of what IBM calls expert integrated systems, combining the flexibility of general-purpose systems, the elasticity of cloud and the simplicity of an appliance tuned to the workload. And I think that the cloud and workload aspects are the key ones here.
I had the chance to talk with Jerry Cuomo, IBM Fellow, VP and WebSphere CTO -- and one of the key presenters on PureSystems at Impact -- about the recent announcement and what it will mean to the world of business and IT. Its impact, if you will. But before I share Jerry's insights, I'd like to step back and talk about cloud in a more general way -- then we'll see how PureSystems fits in.
I sometimes think one of the most important and underrated aspects of cloud computing is "abstraction" -- the way clouds can empower organizations to move up from a lower level of abstract thought and execution to a higher, better one.
Of course, abstraction is a little... abstract itself, as subjects go. So let me trot out one of my patented analogies to clarify a bit.
Have you ever seen a baby when it's first learning to walk? The job is really quite a complex one as far as the baby is concerned. It has to ponder large muscle groups very consciously, deliberately thinking about using one leg, then another, all while also using small muscle groups to maintain its balance.
But eventually the baby can stop thinking about things on that level -- the level of specific muscle control -- and start thinking on a higher, more abstract, more effective level.
Now it's not "I need to move my left leg forward, and put my weight on my left foot" but, much more simply, "I want to walk into the next room."
This new, higher level of abstraction the baby has reached gives it new power to pursue its goals (which may or may not include terrorizing the family pet and deep-searching local trash cans).
And if this baby is ultimately going to reach the highest level of competitive motion -- perhaps becoming a world-class sprinter, the next Usain Bolt -- it is going to have to be thinking on a very high level of abstraction indeed. There is just no time to think about such details as which muscles you'll move next when you're running sprints in the Olympics. You have only about nine and a half seconds to cover a hundred meters.
That's not a bad metaphor for business today -- a similarly competitive world, in which market agility tends to translate into market success. You don't want to have to think about the technical details; you really may not have the time.
You want to focus on your goals and strategies and services, the heart of the value you're creating in the world, and trust that your infrastructure will be up to the efficient execution of whatever you have in mind.
Clouds -- done right -- can be that infrastructure.

The question isn't "What's our tech?" but "How well do we fulfill our workloads?"
All this crossed my mind when I learned about PureSystems and talked with Jerry Cuomo. He agreed with me about the importance of abstraction, but was quick to point out that the new launch delivers far more benefits than just that.
It seems that PureSystems is the end result of IBM's underlying goal: to deliver a next-generation service delivery platform that fulfills workloads optimally -- even given how dynamically workloads can change over time, across technical and business domains, and across organizations.
"PureSystems is unique to our industry," he said. "It represents a bold balance of being open yet prescriptive, and preserving compatibility with your current applications while introducing support for highly efficient new workloads. PureSystems do not just hold the potential to be workload-aware; they are workload-aware. PureSystems do not merely enable workloads; they contain them, including a scalable web workload. They facilitate lifecycle management like monitoring and license management, and what's more, those capabilities work right out of the box. Simply put, IBM PureSystems are not just your cloud-in-a-box solution; they are your workload-aware cloud."
What are the ingredients of the PureSystems recipe? Basically, they're packaged in two groups. The first group -- "next-generation platforms," or NGP -- is a top-caliber variation on Infrastructure-as-a-Service.
But it's in the second group, which focuses on application systems, that the real magic happens.
Recall that IBM, almost uniquely to the IT industry, produces solutions at every layer of the technology stack. That means IBM, almost uniquely to the IT industry, also has the power to combine those layers into optimized packages -- all of which also benefit from IBM's enormous experience consulting with organizations of all sizes, in all industries, on cloud computing topics.
For PureSystems application systems, that means IBM's strengths are multiplied, each helping all the others.
"Today, organizations have choices at every level -- processors, storage, network, OS, middleware and applications," said Cuomo. "While the last decade of open competition around these components has driven record capability and quality, enterprises that want to mix and match these best-of-breed parts pay the very high price tag of the labor cost and skills needed to orchestrate the final composition. However, this leaves very little in the enterprise's innovation budget. PureSystems give the customer back their innovation budget. Our hardware and software experts have used our cumulative experience to create an integrated system that also empowers our clients to stir in their own expertise and capabilities -- easily."
Here you see just what IBM means by "expert integrated systems." It's not just IBM's expertise that's being integrated; it's also the customer's. This is the magic of PureSystems: it is an ideal foundation for private cloud computing that (a) delivers the best technologies IBM has to offer, drawn from the industry's strongest cloud portfolio, (b) combines those technologies in the best ways for a private cloud, in direct support of proven best practices, and (c) still allows the new cloud to be easily tweaked to create a perfect fit for any given organization's needs.

Instant time to value, but also straightforward tailoring
In fact, beyond merely "allowing" that kind of tweaking, IBM has made it remarkably straightforward.
For instance, cloud services executing on PureSystems can be managed by team members both inside and outside of IT proper.
Line of business managers are going to enjoy being able to request a new service right from a catalog, then have oversight of that service themselves -- an experience they may never have had before, and a power akin to being able to walk, instead of having to ask someone else to carry you.
They're also going to enjoy the fact that cloud management for PureSystems can easily be aligned with job roles, so they can manage their services using the interface that works best for them, as determined by the performance metrics that they deem most significant.
IBM has, in fact, created a new admin paradigm just for PureSystems -- another variation on the theme of multiple levels of abstraction -- and Cuomo is very optimistic about how it's likely to be received.
"One of the aspects of PureSystems we think our customers will love is the way they make management so straightforward," he said. "Via our approach of progressive disclosure, they can administer services at the technical level that makes the best sense for them. Specifically, we support a progression with three levels of disclosure. The first, Virtual Application, only requires you to know the needs of your application -- middleware and hardware are hidden. The second, Virtual Systems, pre-arranges middleware in patterns designed to power specific workloads. Last, Virtual Appliance supports a bring-your-own-expertise model, allowing you to include your own middleware and construct your own patterns."
This concept of workload patterns is yet another selling point of PureSystems. Thanks to literally decades of experience in IT consulting, IBM has acquired an extraordinary level of knowledge about middleware/hardware combinations and the patterns that tend to apply. That insight is baked in, so you can leverage the patterns right away. And most organizations will do exactly that.
But you can also, as Cuomo suggested, create and roll out new patterns from scratch. And you can combine these two models -- integrating, in a sense, the best of IBM's expertise and the best of your own.
It's hard to get much more expert or integrated than that, and Impact 2012 will be the place to learn more about it.

Additional Information
- Learn what IBM PureSystems are all about
- Find out more about Impact 2012
- Register now for Impact 2012

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
One of the first things you learn working in IT is how difficult it is to get people to switch from one vendor or IT solution to another. Perhaps you start a new job, at a new company, where they're struggling with a technical problem you've solved in the past. Does your new employer want your opinion on the problem?
As a general rule, it does not. The IT group there is already used to technology X, used in manner Y, and it will turn a skeptical eye on any other approach. You could even call this organization "solution-blinkered": blinded to alternatives by the solution it already has.
Here's another example. In December 2000, I published an essay on Salon.com suggesting that Apple should pursue a specific, technically complex strategy -- a strategy which was perceived as crazy at that time -- in order to rescue itself from market oblivion and become far more successful.
Six years later, Apple pursued the same crazy strategy I had suggested.
Why did it take six years? Because, although my ideas were correct, and although Apple is known for innovation, decision makers inside the company were skeptical of creative possibilities, and wary of the risks that can come from change.
Most organizations are like that. Often, there is simply no good reason for IT to carry on with a problematic status quo, and every reason for IT to pursue something else that looks a great deal more promising.

Want better ROI from IT? Get better database software.
I ran into the same issue recently discussing enterprise database solutions with Conor O'Mahony, Program Director for Database Software with IBM Software Group.
In this area -- enterprise-class databases -- while IBM led the way on mainframe systems, Oracle was one of the first organizations to bring a solution to market on distributed systems. Since then, Oracle has continued to lead the database market on distributed systems. But how much of that leadership is due to Oracle's early mover advantage, and how much is due to its actual capabilities, value proposition and competitive strength?
That seems to me to be a very open question. It has repeatedly been my personal experience, as a former IT guy, that Oracle Database is about as well known for high costs as for high performance. And if Oracle Database's performance has declined relative to the competition, its costs have not.
That's a real problem, given how deeply rooted database software tends to be in enterprise IT infrastructures, and the staggering impact it has on both IT service levels and IT budgets.
O'Mahony sees things in much the same way. "If IT organizations are looking to identify ways to meet their 'do more with less' mandate, reclaiming some of the IT budget set aside for data management has to be on their radar," he said. "Data management costs are often a sizeable chunk of an IT budget; and recent advances in database migration technology are allowing them to significantly reduce those data management costs."
But while competitive options may be superior, organizations often remain blind to those options (i.e., they're solution-blinkered). They have the false idea that switching from one database to another will cost too much, take too long and ultimately create too much risk.
According to O'Mahony, they couldn't be more wrong -- particularly when it comes to the specific case of Oracle Database vs. IBM's own DB2 database solution. Why? Partly because IBM has made it so easy for them to switch.
"Since 2009, DB2 has been adding language-compatibility features," he said. "Specifically, DB2 directly supports the most popular aspects of Oracle's PL/SQL language. That means applications written in Oracle's PL/SQL will run natively in DB2 as well -- typically requiring changes to only 2 percent of the code. It also means that even after a migration has finished, organizations can continue to program in PL/SQL if they want. So any programming talent they've hired in that area can carry on programming just like before."
How does that magic happen? It seems that DB2's capabilities in this area don't stem from any type of emulation (which often runs into compatibility and performance issues).
Instead, they stem from a compatibility layer that really does deliver native performance. Calls made in PL/SQL continue to work just as they did before; they just don't need Oracle technology to do it.
So, to put it simply, you can just pack up your data and applications, move them from Oracle Database to DB2 and they'll run as fast as they did before -- or faster.

Lower bills. Higher performance. The end.
And if you do hop from Oracle Database to DB2, don't be surprised when your operational costs fall like a cow dropped from a helicopter.
This is because Oracle Database is, by any reasonable standard, a pricey solution to support over time -- one that typically requires ongoing "help" from Oracle and thus generates excessive annual fees. O'Mahony suggests that this is an area where organizations can really see major positive change right away.
"Instead of spending lots of money on expensive Oracle support and maintenance contracts, more and more organizations are discovering that DB2 is a comparable product that offers far better value when it comes to costs, performance, storage optimization, and staffing levels," he said. "In fact, some organizations are using this tactic to lower their data management costs by as much as 50 percent, and reclaiming this valuable IT budget for new high-impact initiatives."
Spend less. Get more. That sounds like the kind of smarter solution organizations always say they want, yet are sometimes oddly reluctant to pursue.
And that's really too bad, because forward-looking organizations that have made this leap are already raking in the business benefits: higher performance, lower costs, and all via a nearly painless migration process that often takes next to no time.
"Gone are the days of high-risk IT projects that often missed deadlines and overran budgets," said O'Mahony. "Organizations are now migrating from Oracle Database to DB2 in literally days. For instance, one of the world's largest banks recently moved a core application from Oracle Database to DB2 in just two days. It was able to do this because 99.5 percent of its Oracle PL/SQL code was supported by DB2 out-of-the-box. And this two-day period included data movement, all code modifications, testing and performance tuning. Such short and low-risk database migrations are literally redefining many organizations' tolerance for database migrations."
Would you like another example? Ponder the experience of Reliance Life Insurance, one of India's largest insurers and the third-largest private company in India across all industries.
Reliance wasn't satisfied with the performance it was getting from its legacy Oracle infrastructure. Specifically, it took 36-40 hours to process OLTP (online transaction processing) data. This, in turn, meant that the company faced an unacceptable time lag; it needed key information to be accurate and accessible in real time, but the Oracle infrastructure simply couldn't deliver that. And Reliance had no confidence in that changing any time soon.
For these reasons, Reliance migrated to an IBM solution: DB2 running on IBM Power Systems.
The results? They're now getting the real-time insight they require, because the lag of 36-40 hours they had been getting from Oracle Database has dropped to less than 30 minutes. Customer service is much better informed; customer satisfaction has climbed; and so has application uptime -- 95 percent with IBM vs. only 80 percent with the previous Oracle Database infrastructure. Scalability has also improved dramatically, from 3,000 simultaneous users to 12,000.
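Those Reliance figures imply some striking improvement factors. Here's a quick back-of-the-envelope calculation; every input comes from the results above except the midpoint of the 36-40 hour range, which is my own simplifying assumption:

```python
# Rough improvement factors implied by the Reliance migration results above.
# The midpoint of the reported 36-40 hour range is an assumption.

before_lag_hours = (36 + 40) / 2   # midpoint of the reported 36-40 hours
after_lag_hours = 0.5              # "less than 30 minutes"
lag_speedup = before_lag_hours / after_lag_hours

uptime_before, uptime_after = 0.80, 0.95
# Share of the old downtime that remains after migration
downtime_remaining = (1 - uptime_after) / (1 - uptime_before)

users_before, users_after = 3_000, 12_000
scalability_factor = users_after / users_before

print(f"Processing lag cut by a factor of at least {lag_speedup:.0f}x")
print(f"Downtime reduced to {downtime_remaining:.0%} of its former level")
print(f"Concurrent-user capacity up {scalability_factor:.0f}x")
```

In other words, even reading the numbers conservatively, the lag shrank by well over an order of magnitude, downtime dropped to a quarter of what it was, and capacity quadrupled.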
Perhaps most impressive of all is the fact that all of these benefits come packaged with far lower ongoing costs. To wit: about 50 percent less total cost of ownership for DB2 running on IBM Power Systems compared to Oracle Database running on Oracle-owned Sun systems.
So let's sum up the case for DB2 over Oracle Database:
1. Pain-free migration. DB2 directly supports Oracle Database applications and Oracle's PL/SQL language -- up to 98 percent direct compatibility.[1]
2. Superior performance. If you migrate to IBM Power Systems as well as to DB2, you will get a substantial hike in service levels -- in a typical case, as much as three times faster execution.[2]
3. Lower costs over time. While Reliance experienced an impressive 50 percent drop in TCO, IBM studies suggest many organizations can expect even better -- often, about a 60 percent drop.[3]
Tell me: Is your organization solution-blinkered?

Additional Information
- Learn about IBM Data Management capabilities to better leverage your data
- Find out how migrating to DB2 can boost performance and cut costs
- Meet IBM DB2 10 and IBM InfoSphere Warehouse 10
- Get this eBook on strategies for lowering the costs of data management
- Join in the conversation on IBM database software news

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
1. "Based on internal tests and reported client experience from 28 Sep 2011 to 07 Mar 2012." See also: The facts really matter
2. The facts really matter
3. The facts really matter
Endpoint management is like a headache looking for an aspirin. Recently I asked my friend Perry -- an IT manager at a Very Big Company -- what endpoint management was like where he works.
"Cat-herding," he said.
"But don't you have some sort of endpoint management products?" I asked.
�We use a combo of third-party stuff and the stuff that comes with the OS.�
�And? Don't they help?�
"Well," he said after a pause, "they make the cat-herding more advanced..."
Turned out that in Perry's case, the endpoint management strategy, though it does a certain amount of herding, also adds to the number of cats.
Consider his rough estimates:
- 24,000 user desktops and laptops
- "Low thousands" of virtual and physical servers -- the number changes every day
- Four fundamentally different operating systems (Windows, Mac OS X, UNIX and Linux -- all in different flavors)
Worse, his endpoint management solution isn't really centralized. It requires quite a few new servers (to handle all the endpoint management) and quite a few agents (a different one for each task like security, anti-malware, software distribution, asset management) deployed on all those endpoints. Pulling all of that together to get things done is cumbersome.
Actually, he didn't say "cumbersome." I can't print what he did say.

Mobile devices are changing the game -- is your endpoint management solution up to the challenge?
Things are getting more complicated, too. With the instant popularity of mobile devices like smartphones and tablets, the number and diversity of endpoints have rapidly scaled up.
That means more operating systems, more agents, more security wrinkles and more compliance challenges to consider -- not to mention the host of human-interest issues that apply to personally owned endpoints.
I asked Perry what his answer was to all of that.
"Same as it was five years ago," he said. "Be thankful I don't have to do endpoint management stuff any more."
Well, I couldn't resist telling him about the IBM Endpoint Manager family, which applies neatly to a typical situation like Perry's:
- One agent for a wide range of capabilities
- One server, capable of handling up to a quarter-million endpoints (almost 10 times as many as Perry's organization has)
- One interface to use in gathering and analyzing endpoint information, as well as carrying out endpoint tasks
You might wonder how that one server is up to the job. The answer: high agent IQ. The Endpoint Manager agent actually leverages the endpoint's own resources -- not the server's -- to handle most of the load of tasks like rolling out new apps, installing security updates, changing firewall settings, tracking the number of licensed copies of software and so on.
And yet it only requires 2 percent or less of endpoint resources, so users don't even notice the agent doing anything. So all those endpoints are no longer cats to be herded; they are instead, a de facto grid architecture that distributes computational tasks evenly and handles them transparently. Pretty slick, no?
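The design idea here can be sketched with a toy model. In the sketch below, every number except the rough endpoint count from Perry's estimates is invented for illustration; the point is only the shape of the comparison between a server that does all the work itself and a server that merely collects reports from self-sufficient agents:

```python
# Toy model: centralized endpoint management vs. an agent-based "grid".
# Illustrative numbers only; this sketches the design idea, not any
# vendor's internals.

ENDPOINTS = 24_000           # roughly Perry's desktop/laptop count
WORK_UNITS_PER_ENDPOINT = 5  # patches, scans, inventory checks... (made up)

# Centralized model: one server performs every unit of work itself,
# so its load grows with endpoints * work.
central_server_load = ENDPOINTS * WORK_UNITS_PER_ENDPOINT

# Agent-based model: each endpoint does its own work locally; the server
# only aggregates results, so its load grows with endpoint count alone.
per_endpoint_load = WORK_UNITS_PER_ENDPOINT
server_aggregation_load = ENDPOINTS  # one status report per endpoint

print(f"Central model: server handles {central_server_load:,} work units")
print(f"Agent model:   server handles {server_aggregation_load:,} reports; "
      f"each endpoint handles {per_endpoint_load} units locally")
```

The server's burden in the agent model no longer scales with the amount of management work, only with the number of endpoints reporting in, which is why one server can plausibly stretch to a quarter-million of them.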
All of that came as news to Perry.
What came as news to me, recently, is that the same product family will soon work for those mobile endpoints I mentioned earlier, like smartphones and iPads.

Soon-to-be-released IBM Endpoint Manager for Mobile Devices supports four major mobile platforms

With the advent of IBM Endpoint Manager for Mobile Devices, IBM is tackling one of the biggest shifts in endpoint management in years: the fact that people increasingly want to use (and do use) their own personal devices to handle work stuff.
"We're living in a mobile world," said Kimber Spradlin, Product Marketing, IBM Endpoint Manager family. "Organizations are going to have to find ways to manage mobile devices, too, not just traditional endpoints like servers and laptops and desktops. And IBM Endpoint Manager for Mobile Devices really makes that job easy because it builds on our current platform, so you get the functionality you need, not the complexity you don't."
Specifically, it handles devices based on four mobile platforms: Windows, Apple's iOS, Symbian and Android. And because those platforms handle security and management tasks in different ways, Endpoint Manager for Mobile Devices supports both agent and 'agentless' control mechanisms. This way, a single management solution can continue to address all endpoints -- even though some of them don't allow agent installation at all.
"Apple's iOS doesn't," said Spradlin. "But Apple does provide a management API. So this can be used to handle certain tasks, like partially wiping work e-mails, or calendar data, if the organization needs to be protected from exposure. Android, on the other hand, does allow an agent, so we simply ported our current agent to that platform. In every case, the idea is just to provide the management functionality, and security controls, to whatever extent that it's possible."
Security does seem like a significant issue; mobile endpoints, by nature, move from point A to point B much more often. And if your smartphone disappears on a vacation, you probably don't want outsiders being able to go through the phone, reading company mail and accessing company resources. That's true whether you're the employee who lost the phone, an IT manager who works with that employee or an exec with a focus on minimizing business risk.
For employees who might be concerned about the sensitivity of personal data, an important point is this: the IBM offering protects you, too.
Suppose your missing phone is loaded with family photos that show your kids, your street address, your pricey new car and other things you'd rather a phone-stealing criminal not be aware of. You can simply request that your phone be data-wiped or access the self-service portal if your company implements that option. And presto, it will be.

Create an in-house app store for extra value
Also interesting: Endpoint Manager for Mobile Devices allows organizations to create an enterprise app store. This way, they can offer specific new capabilities for mobile devices in a way that -- just like the security controls -- is of direct benefit to employees.
For instance, organizations might be able to get a significant discount on third-party apps by buying licenses in bulk, and then passing on the discount to employees. "Reduced rate" is a popular phrase when it comes to software purchases.
And, of course, there's a security angle to consider here as well. Employees can download apps from the enterprise app store in confidence that they've already been exhaustively scanned for malware, and are endorsed by the organization as trustworthy. That's not always the case for new apps -- and as mobile device popularity continues to skyrocket, the odds of security-problematic apps go up every year.
Similar value stems from apps that are developed internally. Imagine an organization has a unified asset management solution. Imagine that solution is used in vastly different ways by dozens of different operational groups.
In such a case, the organization might create feature-limited, task-focused apps that target exactly what those groups need to do. These apps could then be offered via the app store for easy downloading and installation to any supported mobile device.
This story gets even more appealing when you consider that, over time, as new versions are released, the older versions installed on endpoints would normally go out of date. That could translate into all sorts of unwanted ramifications, from less-than-ideal performance or stability all the way up to something a lot more catastrophic, like a serious security shortcoming that leads to a breach of company services.
"What you're talking about is endpoint lifecycle management," said Spradlin. "That's one of the areas where IBM Endpoint Manager shines. For mobile devices using apps like that, it's great to be able to push out new versions -- knowing in advance which endpoints need them and skipping the others. Now, the device owner still has to approve the installation, so it's not completely automatic... but then on the other hand, that user probably wants to know when new apps are being installed, right? So there's a nice balance between the organization's need for risk management and productivity, versus the user's need to be aware of what's on the device and what it does."
Interested in learning more? Sign up for the beta and be sure to attend Pulse 2012 in Las Vegas, where mobile endpoint management will be a major theme, providing you with a lot more specific information about this offering, slated for a March release date!

Additional Information
- Sign up for the IBM Endpoint Manager for Mobile Devices beta
- Explore the Mobility and Endpoint Management stream at Pulse 2012
- Register for Pulse 2012 today
- Discover how IBM Mobile Enterprise can help you improve productivity, grow market share, drive innovation and enable a social enterprise
Ever get the sense that marketing jargon is getting out of hand? Consider this sentence: "Siloed management must give way to a new paradigm of holistic business value."
"New paradigm," in particular, seems a little doubtful. I learned a long time ago not to talk about paradigms, especially in the context of shifting. But I think the rest of it just needs a little rephrasing.
Let's try this: "IT teams and technologies should collaborate more to work better."
That's not so bad, is it? It's easy to find an example, too: security and storage management.
These two seemingly separate IT domains turn out to be flip-sides of the same coin: data protection. And a coin is probably a good metaphor here, because data is often the most valuable asset an organization has.
Imagine your organization. Now imagine how productive your organization would be without any data. See what I mean?
Security and storage management are your vigilant friends with specialized military training who hang around your data and keep it from being threatened, damaged, mutilated, spied upon, lost, kidnapped or murdered in cold blood. And to get that done, they work best as a collaborative team.

Encryption delivers powerful protection for almost any form of data
To pursue this idea in a little more detail, consider the most traditional form of backup media: magnetic tape.
It's inexpensive, commonplace and even today, in extremely widespread usage. And it's also a gigantic potential security hole, because the stuff that gets backed up onto it is quite often the stuff organizations want to protect the most. So it typifies the natural link between security and storage, and underscores the fact that organizations should think about connecting these domains a lot more naturally.
Anne Lescher, Product Marketing Manager with IBM Security Solutions, agreed with me on this point when I talked to her.
"Critical data protection should utilize encryption, along with key management, in the event that identity and access controls can be bypassed or storage media is removed or stolen," she said. "Everyone's worst fear is that their tapes might fall off the truck in transit or be stolen for malicious use."
Absolutely. Encrypting data everywhere you reasonably can, including backup tapes, leads to better security and a better business outcome.
So solutions that optimally manage encryption keys, like IBM Tivoli Key Lifecycle Manager, are already pretty compelling and getting more compelling by the day. They help organizations serve and manage those keys in a centralized way for as long as the keys are in use, and integrate directly with tape drives (from both IBM and third parties) to encrypt data as it's stored on tape.
So if a tape, as Lescher puts it, falls off a truck, it's useless to anybody who finds it because all the data on it is already encrypted. That data is much better protected because this organization's security and backup capabilities have now collaborated to work better.
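To make "key lifecycle management" a little more concrete, here is a minimal sketch of the state machine involved. All names here are hypothetical; this is plain Python illustrating the idea of generating keys centrally, serving them only while active, and refusing to serve them after retirement. It is not Tivoli Key Lifecycle Manager's actual API:

```python
import secrets

# Minimal sketch of centralized key lifecycle management (hypothetical API).
# Real products add policy, auditing, and direct tape-drive integration;
# this only models the serve/retire state machine.

class KeyLifecycleManager:
    def __init__(self):
        self._keys = {}      # key_id -> (key_bytes, state)
        self._counter = 0

    def generate(self) -> str:
        """Create a new active 256-bit key and return its ID."""
        self._counter += 1
        key_id = f"key-{self._counter}"
        self._keys[key_id] = (secrets.token_bytes(32), "active")
        return key_id

    def serve(self, key_id: str) -> bytes:
        """Hand out key material only while the key is active."""
        key, state = self._keys[key_id]
        if state != "active":
            raise PermissionError(f"{key_id} is {state}, refusing to serve")
        return key

    def retire(self, key_id: str) -> None:
        """Retire a key after rotation; it can no longer be served."""
        key, _ = self._keys[key_id]
        self._keys[key_id] = (key, "retired")

klm = KeyLifecycleManager()
kid = klm.generate()
assert len(klm.serve(kid)) == 32   # active key is served
klm.retire(kid)
try:
    klm.serve(kid)
except PermissionError:
    print("retired key refused, as intended")
```

The point of centralizing this logic is exactly the tape-off-the-truck scenario: the tape drive only ever holds ciphertext, and the key that unlocks it lives (and dies) somewhere else entirely.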
Scale that idea up to the level of production servers and it gets even stronger. Enterprise infrastructures, of course, are chock-full of critical business data kept on disk arrays. Can the same IBM solution help protect that data as well, in basically the same way?
It certainly can. And because you're using the same solution to do multiple jobs, you avoid making things overly complex as well -- a common enemy of progress in the world of IT.
Another point: encryption can also help organizations more easily comply with government regulations (example: HIPAA) concerning sensitive data (example: patient health records). That's more important than ever, given the way compliance failures increasingly lead to stringent fines -- not to mention negative publicity and serious brand damage -- if data is exposed and customers are affected.
"Effective data protection can be complex to the point of seeming like rocket science," said Lescher. "The complexity of encryption technology can scare storage and security administrators away from using effective protection controls. So simple, integrated security is essential for both peace of mind and critical data protection."

Data protection means never having to say "it's gone forever"
Of course, backup tapes are just one element of storage. You can make essentially the same case for storage management in a larger sense. Generally speaking, you want to be able to protect data as comprehensively as you can, everywhere you can, while introducing as little new complexity as you can to get it all done.
Talking to Rich Vining, IBM Tivoli Storage Marketing Manager, drove that point home for me.
"When someone says data protection, do you think of backup and recovery, or encryption and access control?" he asked. "Because they're both directly relevant and they both need to be addressed. Are you confident that during your next data disaster, the right person with the right training will log into the right system, restore the right data to the right place, do it quickly enough to limit any losses and not break anything else? If you've deployed a number of different point solutions from different vendors to address the complex needs of your business, the answer is probably no."
This scenario illustrates data protection from a fundamentally different angle -- the idea that even without malicious attacks or inadvertent backup tape losses, an organization can put its own data at higher risk through problematic storage management. It can slow down backup and recovery processes, skip data that should never be skipped and ultimately lose critical data.
That prospect is enough to give business leaders the heebie-jeebies.
It also underscores the charm of backup solutions like IBM Tivoli Storage Manager that centrally and comprehensively back up, archive and restore all enterprise data, everywhere it exists, quickly and cost-effectively.
"I like to think of data protection as being comparable to health insurance," said Vining. "When something goes wrong, whether it be the flu, an accident or something much more serious, you better have good insurance to keep from ruining your financial as well as your physical well-being. Same thing with data protection -- its value comes into play when something goes wrong, avoiding the huge costs of lost data and business downtime."
It's an interesting parallel, and a timely one given the nation's current interest in healthcare reform and the various ways we might go about it.
In healthcare reform, the fundamental problem reformers would like to address is escalating costs, i.e., insurance premiums that climb every year. A direct parallel to that situation exists in the world of data protection, in the form of escalating data volumes, which similarly grow every year. Data is also increasingly scattered -- distributed over more endpoints and servers than ever before, and in more ways. Conventional backup solutions and strategies often no longer suffice to handle it all, and even if they have the capability, they often don't have the time.
That means more and more data goes unprotected every year. And that's just not acceptable given how critical data is to business operations and strategies. What's the fix?
Vining's answer: Smarter backup solutions, like Tivoli Storage Manager.
"One of the biggest, if not the biggest, cause of data growth is performing full backups every week, which most data protection products force you to do," he said. "That's because of needless redundancy. Your full backups probably contain more than 90 percent of the same data you backed up last week, and the week before and so on. Why not avoid creating all that duplicate data by only performing incremental backups -- forever?"
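Vining's 90-percent-redundancy point is easy to quantify. In this sketch, the week-to-week overlap comes from his quote; the 10 TB dataset size is a made-up example chosen only to make the comparison concrete:

```python
# Storage written over a year: weekly full backups vs. incremental-forever.
# The <=10% weekly change rate follows from the "more than 90 percent of
# the same data" quote above; the 10 TB dataset size is illustrative.

DATASET_TB = 10.0
WEEKS = 52
CHANGE_RATE = 0.10

# Weekly fulls: the entire dataset is written every single week.
weekly_fulls_tb = DATASET_TB * WEEKS

# Incremental-forever: one initial full, then only changed data each week.
incremental_tb = DATASET_TB + DATASET_TB * CHANGE_RATE * (WEEKS - 1)

print(f"Weekly fulls:        {weekly_fulls_tb:.0f} TB written per year")
print(f"Incremental forever: {incremental_tb:.0f} TB written per year")
print(f"Reduction:           {1 - incremental_tb / weekly_fulls_tb:.0%}")
```

Under these assumptions the incremental-forever approach writes 61 TB a year instead of 520 TB -- a reduction of nearly 90 percent, which is exactly the redundancy Vining is pointing at.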
Indeed, why not?

Get your thumb on the pulse of data protection
If you'd like to find out more about these subjects, think hard about attending Pulse 2012. You'll get a chance, via technical demos exploring real-world scenarios, to see how security and storage management can work hand-in-hand to protect your data -- everywhere it lives throughout your infrastructure -- and direct specific questions to solution and business process experts from around the world.

Additional Information
- Register for Pulse 2012
- Share your viewpoint on the Tivoli Storage Blog
- Watch a video podcast with Rich Vining about storage management
When you hear the phrase "team-building exercise," what comes to mind? If you're like me, you get an image of a bored group of people listening to a consultant. The consultant asks Person A, who is blindfolded, to fall backwards, trusting that Person B will make a rescuing catch. (In the Hollywood version, Person B is unexpectedly distracted and Person A brains himself on the concrete floor.)
The trouble with this sort of exercise, as I see it, is simple. There is not, in the usual course of business operations, much in the area of wearing blindfolds and falling backwards. It just never comes up.
This being the case, a better team-building exercise would recreate more accurately the specific challenges that people do experience every day in their jobs.
Furthermore, it would do that in a more accelerated and quantified manner than would be possible in real life. That way, any lessons learned could be learned much more quickly than would be possible on the job, and participants could get a sense of just how effective (or imperfect) their collaboration really was.
If you've ever seen the military simulations that fighter pilots use in training, you know what I'm talking about. The basic idea is to give these pilots a way to learn that:
(a) closely recreates the real experience of flying a plane
(b) can be executed much more quickly than really flying a plane
(c) assigns the pilot a score, and thus puts performance in very clear terms
(d) doesn't risk the daunting possibility that a stupefied newbie pilot will steer an $80 million Lockheed Lightning into a mountain.
What if you could take that basic premise, and apply it to IT complexities -- creating a kind of simulator of them? Wouldn't that be a powerful learning experience, capable of teaching people all kinds of complex lessons in short order?
Well, that's exactly what IBM will be offering at Pulse 2012, March 4-7 in Las Vegas: a Service Management Simulator Workshop.

Are you up for the challenge?
Going beyond the fighter-pilot simulator described above, this Simulator focuses not on individual performance, but on team performance.
The idea of the Simulator is to assemble a team of 15 to 20 players in a room, assign them different job roles and simulate a real-world organization facing typical real-world business and IT challenges.
Then hammer them with those challenges and see how well they do.
The roles vary widely both in terms of hierarchical rank and job duties:
- Senior management (executive team)
- Line-of-business owners
- Operations management
- Service desk staff
- Technical support services
Furthermore, the hypothetical logistics organization where they work focuses on shipping and fulfillment, and like all such organizations, holds itself to an incredibly high standard of performance.
Remember this slogan? "FedEx... when it absolutely, positively has to be there overnight."
You can see the guarantee implied by that kind of language. So the collaboration between team members at this organization has to be as seamless and friction-free as possible to increase the odds of that guarantee working out for the maximum number of shipments.
When issues arise (and the Simulator is just merciless in this respect), those issues have to be isolated to root causes, assigned to the right people and resolved lickety-split. Otherwise, deadlines will be missed and the organization will sustain a quantified business impact.
And since the scoreboard in the Simulator is constantly updated to reflect revenue and profitability in specific dollar terms, that impact will be painfully clear.

Pull a few ITIL rabbits out of the hat
Now, it does help quite a bit that this organization (just as real-world organizations) has a powerful resource to draw on in accomplishing all of these goals.
I'm referring to ITIL -- the Information Technology Infrastructure Library, aka The World's Leading Best Practices Framework for IT People. In the latest version (v3), ITIL was updated specifically to address service management issues of the sort you'll find in the Simulator.
However (as those with experience in best practices have discovered for themselves), ITIL isn't really a 1-2-3-4 ops manual. It doesn't talk about particular solutions from particular vendors, or how to use and combine them. It instead talks in more abstract terms about basic tasks (like trouble ticket assignment or server provisioning or resource allocation). Then it leaves the implementation up to you.
So, while team members can lean on ITIL concepts and practices to get a higher score, they'll need to figure out all the details for themselves. Just like they'd have to do in the real world.
A session with the Simulator runs for several hours, and teams will get a chance to play several rounds (each taking about an hour). They're usually going to need several rounds, too.
This is a hardcore, no-holds-barred, spit-on-your-corpse-and-laugh sort of game, and it's not for wannabes. For instance, when stuff goes wrong -- and it's always going wrong -- a loud horn blares. If you don't like that, or if you find it distracting, too bad. Perhaps you can find a Pac-Man machine somewhere in Vegas and play that instead.
But for those who really engage with the Simulator, and make a serious, sustained effort to learn and improve, the payoff will be considerable: a drastically improved comprehension of what it takes to make ITIL concepts fly in a pressure-packed environment that closely recreates the real world.
David Ojalvo, from IBM's Service Management group, can bear witness to that. Watching an early version of the Simulator in 2011, he observed:
"After three hours and three rounds, the group was both exhausted and exhilarated... I had a chance to interview several of the participants after the session, and they were all effusive in their praise for the workshop. Clearly, the workshop far exceeded their expectations, and they were anxious to share the experience and apply some of the best practices at their own organizations."

Holding a mirror up to real life
Toward that end -- practical application -- the Simulator has been tweaked to reflect the way organizations have changed in recent years.
For instance, beyond ITIL implementation and service management complexities, it now also incorporates a second organization alongside the logistics company. This second organization is an external service provider that handles some (but not all) of the IT services the logistics organization is responsible for.
If that has a familiar ring to you, ponder the phrase �third-party cloud host� and consider how much more popular those have become in the last year or two. IBM is aware of that development and has taken it into account.
The result is that the game now actually involves two hierarchies, two infrastructures and twice the total required collaboration -- all of which makes it harder than ever. (I told you it was merciless.)
And, of course, the challenges that come up vary not only in nature, but also in timing. So don't be surprised if you get slammed with four different challenges simultaneously, and have to conduct an improvised triage to decide what to do first. This represents a challenge in itself, and it can make or break the eventual score teams get -- just as problem prioritization can make or break real-world businesses.
Maybe all this sounds a little intimidating? Well, it's meant to be. If it weren't, it wouldn't be much of a simulation. But more importantly, when all is said and done, it's also fun.
"In my opinion," said analyst Rich Ptak of Ptak/Noel after attending the Pulse 2011 Simulator Workshop, "this was by far the most fun and engaging workshop I've attended in a long time. This opinion was confirmed with other attendees... I wasn't ready to quit at the end of the three hours. I was really involved and want to go for more. If you get a chance, take this workshop, but watch out: the scorekeeper has lots of surprises for you."
Think you're ready for the Simulator? Register to attend Pulse 2012 and find out!
The Workshop will be held Sunday, March 4, from 2:00 to 5:00 pm in Room 306, located on Level 3 of the MGM Grand Hotel Conference Center. To receive additional information, email Tivoli Marketing at firstname.lastname@example.org and include the following details: confirmation that you want to attend along with your name, title, email address, and cell phone number. You will receive a return email from David Ojalvo confirming your participation in the session.

Additional Information
- Learn more about Business Service Management
- Find out what Pulse 2012 has to offer
- Register for Pulse 2012
- Watch this Service Management Simulator Workshop video
Someone recently asked me how it was possible that cloud computing began to take off at just about the same time the economy got cold -- circa 2008. The argument came with a culinary simile: major new technology shifts (such as cloud) are like ice cream -- delicious when things are hot, but forget about it when things are not. And cloud rolled along at a not-hot time. Ergo, cloud should have failed.
My counter-argument was that they had the wrong culinary simile in mind.
Cloud computing, or so it seems to me, is not like ice cream in winter. Instead, it's more like a super-efficient oven in winter. Less about the eating, more about the baking. And if you're a baker, baking is a big deal.
So, for instance, let's take the case of commerce. How do you best implement that in a problematic economy?
Well, it stands to reason that any architecture responsible for the flow of business transactions needs to be as efficient, and scalable, as possible. That way, you can minimize costs when demand is lower, and maximize market responsiveness when demand is higher. And the more unpredictable demand is, the more appealing that idea becomes.
Cloud computing services are a perfect match for that description. So it's easy to agree with a recent blog post I read that predicted continued steady growth of cloud computing -- even in a "challenged" economy -- over the next five years. With cloud, organizations don't have to shell out for the sum total of the hardware and software of the cloud. They can simply lease someone else's cloud as they see fit, to handle specific requirements they have at any given time. If they need more resources, they can pay for those resources as they go, dialing back at will. More, they can zero in on exactly the services they want, and skip the ones they don't, making changes month by month as circumstances change.
Added up, this amounts to remarkably flexible, granular control over how much they pay to handle commerce over time. It also substantially reduces the business risk that would have come from a private, on-premise commerce infrastructure -- a huge investment that might not pay off at a time when demand isn't something that can clearly be foreseen.
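That pay-as-you-go arithmetic can be sketched in a few lines. All of the rates and demand figures below are hypothetical, chosen purely for illustration; they are not IBM pricing:

```python
# Sketch: pay-as-you-go cloud spend tracks demand, while a fixed
# on-premise build costs the same every month regardless of load.
# All rates and demand figures here are hypothetical.

def cloud_cost(demand_units, rate_per_unit=2.0):
    """Monthly cloud bill scales with the capacity actually used."""
    return [round(d * rate_per_unit, 2) for d in demand_units]

def on_prem_cost(months, fixed_monthly=250.0):
    """Amortized on-premise cost is flat, sized for peak demand."""
    return [fixed_monthly] * months

# A volatile year: demand dips in a slow economy, spikes in December.
demand = [100, 80, 60, 50, 50, 60, 70, 80, 90, 110, 140, 200]
cloud = cloud_cost(demand)
onprem = on_prem_cost(len(demand))

print(sum(cloud))   # total cloud spend for the year
print(sum(onprem))  # total fixed spend for the year
```

When demand runs below the peak the fixed build was sized for, the elastic bill comes in lower; the gap closes only in the months when demand actually spikes.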
So, to a risk-averse business leader, commerce on a cloud probably looks better in a cold economy -- not worse.

Cloud-ifying your commerce architecture can really pay off -- if you get it right
"Probably" is, admittedly, a little dodgy as qualifiers go. So I thought I should probably confirm this opinion with someone who knew better than I did.
A chat with Dave Carmichael, Manager of Cloud Business Solutions at IBM, was a major help. Carmichael's take was similar to mine -- though he also suggested the story was more complex than that.
"Economically, things are volatile right now, and that's having a real impact on the world of commerce," he said. "A volatile economy brings with it threats; companies need a strategy to handle the threats. But they also need to take advantage of the opportunities that volatile economies have historically presented."
Opportunities? This was something I hadn't really considered, but on reflection, it makes perfect sense.
If you think of a volatile economy as exerting pressure on organizations, you can see that the pressure probably forces them to take a new look at how they get things done. It acts, in other words, as a catalyst for change: steering organizations toward smarter, more efficient, more capable and more cost-effective strategies.
If the sum total of that change is effective enough, then a volatile economy has, in a practical sense, become an opportunity.
Carmichael sees commerce in the cloud
in much this way. "Cloud can be more than just a part of commerce -- it can be central to business strategies in this area," he said. "That's because cloud can deliver IT without boundaries, help organizations build enduring customer relationships and, in doing so, transform the economics of innovation."
How specifically does this work -- this idea of "building enduring customer relationships" via cloud?
Regular readers of this blog may recall that I wrote about the IBM Smarter Commerce
initiative some weeks ago. The idea there was very similar: to put customers at the center of every phase of the commerce cycle, from Buy to Market to Sell to Service. By improving each phase in sequence, the overall customer relationship could be both strengthened and extended.
The IBM cloud commerce strategy is, in essence, a super-efficient, super-flexible way to pursue that idea. Software capabilities brought to IBM via recent acquisitions -- Sterling Commerce, Unica, Coremetrics and ILOG among others -- are now providing the technical foundation of commerce solutions hosted in an IBM cloud.
This means IBM clients can simply pick the commerce solutions they need to get the outcome they want, targeting some or all of those four commerce phases. And when they do, they'll receive best-in-class performance and features without having to worry about any of the implementation and management required by a private cloud architecture.

Weigh the pros against the cons
Carmichael was careful to point out, though, that cloud-based commerce -- like everything else in this world -- has its cons as well as its pros.

Cloud Commerce Pros

Higher business acceleration. Because you don't have to implement or manage the cloud itself, you can concentrate on what really matters: your services. This significantly reduces the time needed to bring those services to market; collaborate with customers, suppliers and partners; and analyze incoming data in real time to understand, and serve, your customers better. You can also scale services up or down far more quickly than you could without a cloud.

Lower business risk. No capital investment is required in IT infrastructure (hardware or software). Your IT team can worry less about technical details, and more about business strategies. And your total cost for cloud services becomes both remarkably predictable and remarkably adjustable -- helping you dial in just the right commerce formula while keeping a close eye on the price tag.

Cloud Commerce Cons

Long-term versus short-term costs. An on-premise, private commerce implementation means a huge up-front capital expenditure, but because you own it, its month-by-month cost can work out lower than cloud services over time.

Lower customization potential. If you don't own the cloud, you can't customize the cloud and cloud services -- at least, not to the same degree as if you owned it.
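The long-term-versus-short-term trade-off is really a break-even question: how many months until the big up-front build undercuts the subscription? A minimal sketch, with figures that are entirely hypothetical:

```python
# Sketch: find the month at which a large up-front, low-running-cost
# on-premise build becomes cheaper than a subscription cloud service.
# All dollar figures are hypothetical.

def break_even_month(capex, onprem_monthly, cloud_monthly):
    """First month where cumulative on-premise cost drops below
    cumulative cloud cost, or None if cloud stays cheaper for 10 years."""
    for month in range(1, 121):
        onprem_total = capex + onprem_monthly * month
        cloud_total = cloud_monthly * month
        if onprem_total < cloud_total:
            return month
    return None

# A $120k build costing $1k/month to run, versus a $4k/month cloud service.
print(break_even_month(120_000, 1_000, 4_000))
```

The interesting output is not the number itself but how far out it sits: if the break-even month lands beyond your planning horizon -- or beyond your confidence in demand forecasts -- the cloud's pay-as-you-go model is the lower-risk choice.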
The IBM approach, though based on cloud services, really aims at the best of both worlds. It gives organizations the option to stay in-house for some capabilities, but outsource others to the IBM cloud whenever that makes good business sense.
True Value, for instance, decided to leverage IBM Software supply-chain management capabilities in this way. The retailer-owned hardware cooperative's logistics challenges come as a natural result of their distributed presence worldwide: every year, they distribute more than 600 million pounds of freight to more than 5,000 stores in more than 50 countries.
Via the IBM Sterling supply-chain visibility solution, True Value
was able to establish more quickly, and more easily, where different shipments are at different times, and why delays are occurring -- contributing to a 57 percent reduction in lead time, a 10 percent increase in fill rate and a stunning 85 percent reduction in backorders.
A different organization might find there's no problem with supply-chain management, but that, instead, analytics of customer purchases are weak. Organizations in this situation could buy analytics services from IBM and call it a day, leaving supply-chain capabilities as is. The IBM idea, in every case, is simply to give customers the best available range of choices.
Carmichael's expectation, though, is that going forward, more and more organizations will pursue an approach to commerce that involves cloud to at least some extent, because the pros will increasingly outweigh the cons.
"Cloud is changing the game for companies, forcing them to rethink their IT so they can reinvent their business," he said. "Cloud really is one of those once-every-fifteen-years phenomena, like the world wide web, the PC, the mainframe and the typewriter. All of them really were paradigm shifts. And notice that all of them have IBM in common, too. For the last hundred years, we've been helping our clients get the best possible business value from all kinds of change in technology. Commerce in the cloud is no different."

Additional Information
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Even as a small child, I knew it would one day be my destiny to write poetry about Business Process Management (BPM). That day has come. I present to you this humble work:
Ignore the process? All too soon
The outcome you'll be ruing.
Because without good BPM,
You don't know what you're doing.
It lacks, I admit, a Shakespearean elegance and lyricism, but I think the essential point is there.
Business process management is all about getting things done well: engaging the right people at the right time to do the right tasks and thus achieve some overarching goal. And unless you optimize that complex process, you probably won't achieve the goal.
This concept -- intuitively obvious though it may seem -- is not always pursued very well by organizations today. Several possible explanations occur to me.

1. The default human tendency to wing it -- improvise, and see where things go. If we're talking about one guy tying his shoes, yes, he can probably wing it and still get things done on time and under budget. But if we're talking about an organization of 10,000 people keeping track of fluctuating customer demand for a wide range of products and services, and then meeting that demand in a way that ultimately creates growth and pleases steely-eyed stockholders, then winging it is for the birds.

2. Process entrenchment. An organization is founded, grows, succeeds and then just...coasts. It got used to doing things in certain ways, casually assuming they would always work, and it continued to do things in those ways even when the world changed around it. Meanwhile, nimbler competitors adapted more successfully. Instead of becoming the next-generation Amazon.com, one day the organization woke up and realized it was only the last-generation Borders. Here, too, problematic BPM is likely a major culprit.

3. Fumbled potential for internal collaboration. Suppose senior execs have a new plan; getting that plan implemented by the troops is often slower and clumsier than it ought to be because business processes aren't well understood. Alternately, ideas from new team members who enter the company at various levels in the hierarchy may be a great resource. But because those ideas are hard to illustrate and communicate intuitively, they don't usually create real change.

Keeping the focus on the people -- not the tech
Directly on point in solving all three of these problems is an offering from IBM Software called Blueworks Live™. This service and capability, hosted and run in an IBM cloud, is specially designed to simplify and accelerate the creation and improvement of processes.
Organizations pay a nominal monthly fee, based on an editor/contributor model, and team members are empowered to collaborate with each other in discovering and documenting existing business processes, developing new ones and optimizing both.
Because it executes in an IBM cloud, it spares organizations the need to purchase, deploy and integrate special BPM software of their own. Instead, they can just pony up a little money and be productive in less than one day, using any standard web browser as an interface to the service, and sharing the results with each other in ways they can easily control.
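To make "discovering and documenting" a process concrete, here is a minimal sketch of a documented process as a plain data structure, with a simple completeness check. This is an illustration only -- the process name, fields, and helper below are invented for the example and are not the Blueworks Live data model:

```python
# Sketch: a documented business process as an ordered list of steps,
# each with an owner and a description -- the kind of artifact a team
# might capture collaboratively. Illustrative only; not an IBM schema.

def undocumented_steps(process):
    """Return the names of steps still missing an owner or description."""
    return [s["name"] for s in process["steps"]
            if not s.get("owner") or not s.get("description")]

onboarding = {
    "name": "Customer onboarding",
    "steps": [
        {"name": "Receive order", "owner": "Sales",
         "description": "Log the order in the CRM."},
        {"name": "Credit check", "owner": "Finance",
         "description": "Verify the customer's credit standing."},
        {"name": "Provision account", "owner": ""},  # not yet documented
    ],
}

print(undocumented_steps(onboarding))  # flags the incomplete step
```

Even a toy check like this captures the collaborative point: once the process is written down in a shared, structured form, gaps become visible to everyone instead of living in one person's head.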
That's not just better BPM; it's faster BPM, and it helps improve business agility in a larger sense. This is a change sorely needed at most organizations today, and also a major factor behind their renewed interest in BPM generally.
Dave Marquard, with the IBM BPM Product Marketing team, made this clear to me. "Given today's larger economic environment -- which is challenging, to say the least -- and the speed of competition today, you either fall behind or become radically more efficient. And if you want to become more efficient, improving your business processes is a great place to start. Ideally, processes should be as optimized as the technology you use to carry them out."
That made a lot of sense to me. Too often, in business, it seems that the concept of optimization applies largely to the obvious technical areas like processors, system performance or workload efficiency, or the speed of this or that algorithm.
But even added up, all the optimized technology in the world isn't going to drive a more agile outcome if your business processes are sluggish.
Marquard, too, sees the case for Blueworks Live as a business-centric one -- keeping the focus squarely on what the organization does, as opposed to the tools used to do it.
"With Blueworks Live, we're helping teams put business in control of their processes, instead of handing those processes to IT, and then watching as IT spins its wheels for a year, finally handing them back something unwieldy that they can't use easily or quickly, and that's out of date the day it arrives," he said. "When you've actually got business people involved from the start, they're creating something they understand, and the spotlight stays where it should be: on improving whatever it is you're trying to do."
Another strong suit of the solution: cross-team, cross-generation, hierarchy-spanning collaboration.
"We're seeing, in our discussions with customers, that a lot of new employees entering organizations come from a Web 2.0-esque paradigm -- the Facebook and Twitter world, in which they can create a new account and get busy in a matter of minutes. But they don't usually get that kind of experience from the BPM tools in place, which are usually pretty cumbersome in comparison," said Marquard. "That's why we designed Blueworks Live to drive quick productivity in that Facebook-like way -- rapid development and quick time-to-value. New hires can see how things are done right away, be more productive, and maybe even suggest optimizations that would help accomplish things even faster and more easily."

How does 1100% improvement sound to you?
The business risk to Blueworks Live clients is also remarkably low, because so are the costs and time needed to get busy; outstanding ROI is practically assured as a result.
That stands in sharp relief to some of the current solutions commonly used for process development -- you know, the kind that have to be purchased, installed, configured and integrated, often requiring IT to hit individual workstations for every team member involved. In contrast, the instant time-to-value of Blueworks Live translates into a potentially instant improvement in business agility.
Presbyterian Healthcare Services, an Albuquerque-based not-for-profit provider, has had exactly that experience. Until recently, they used a different process visualization tool, and ran into this all-too-common problem: almost as soon as the tool was used, the organization literally "forgot about the results." Whatever processes were developed, there was little to no sharing, no optimization, and as a result, no significant improvement in the business outcome. It was almost as if the visualization tool wasn't there at all.
With the IBM solution, adopted in 2011, this organization is singing a different tune. Said Doug Johnson, director of innovation for Presbyterian Westside Healthcare System
: "Using IBM Blueworks Live, employees are about 12 times more productive. The key word here is empowerment. Employees are now empowered to create the processes that they need."
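That quote and the headline figure line up: a 12-fold productivity multiple is an 1100 percent improvement, because improvement is measured over the old baseline rather than from zero:

```python
# A productivity multiple expressed as a percentage improvement
# over the old baseline: doubling (2x) is a 100% improvement,
# so 12x works out to 1100%.

def percent_improvement(multiple):
    """Convert an 'N times more productive' multiple to percent gained."""
    return (multiple - 1) * 100

print(percent_improvement(12))
```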
What can your organization do, in one business day or less, to improve employee productivity by a whole order of magnitude?

Additional Information
- Check out IBM Business Process Management capabilities
- More on improving business agility at the IBM Impact blog
- Learn more about Blueworks Live
- See how IBM Business Process Manager simplifies your complex business
- Check in with the BPM Socialite blog
- Great blog post from Neil Ward-Dutton on people and business agility

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
One thing I like about asset management, or in this case integrated workplace management, is that I get to talk about it in heroic language: "You get more control over both space and time."
Maybe that sounds a little dramatic to you? If so, here's my justification: Asset management solutions aren't just about assets per se. They're also about how assets generate value over extended periods -- their complete lifecycles.
I'm not just talking about the things that you're familiar with like IT assets. I'm talking about physical and capital assets that are part of a smarter physical infrastructure for manufacturing plants and facilities. I mean the whole kit and caboodle: IT assets, facilities assets, mobile assets, field assets -- any sort of asset you can imagine. Using asset management tools, you can continually collect information about all of those asset groups, then maintain, configure and enhance all of them as needed over time.
If you do, they will last longer, perform better and contribute more to everything you're trying to use them to do. Thus, you have the capability to obtain more power over both space (all assets, however distributed they may be) and time (years or decades).

Drive up the business value of entire buildings, campuses and geographic sites
In recent years, the case for asset management solutions has only gotten stronger. Partly, this is because all of these ideas are now being applied to entire categories of assets that haven't really been managed very well at all before. Take buildings, for instance. Here we have a sort of mega-asset incorporating many subclasses of assets.
It's one thing to talk about optimizing a server; it's quite another thing to talk about optimizing an entire data center full of 5,000 servers, as well as HVAC units, lighting, electricity and plumbing infrastructures, etc., for total business value. This is an area where enterprise-class asset management solutions can really deliver unique value.
Furthermore, you can scale up that argument even further to encompass whole campuses of buildings or geographic sites altogether. Even if you focus strictly on one little slice of asset management functionality, such as energy efficiency, it's plain that most organizations with multiple buildings, or multiple sites, don't really have the visibility, control, and automation they need to optimize the return from facility assets, one of the top four expenses for most organizations.
So when I read in April of this year that IBM had purchased TRIRIGA
, a provider of software solutions for Integrated Workplace Management Systems (IWMS), I wasn't the least bit surprised. It seemed like IBM was rounding out the Maximo asset management capabilities of IBM Software with new solutions designed to increase the business value generated by buildings and campuses in new ways.
When I recently talked to Mary Gorczynski, Marketing Manager for IBM Asset Management, she confirmed this basic interpretation.
"TRIRIGA helps organizations reduce operational costs of facilities, increase return on real estate assets, and mitigate environmental regulatory risks," she said. "Key functionality includes space and facilities management, energy and environment sustainability, capital project management and real estate portfolio management."
Of course, there are other solution providers out there in this space, and IBM is well known to have deep pockets. Meaning, IBM could have bought any of several players -- so why TRIRIGA, instead of them?

TRIRIGA: Excellence in both vision and execution
A little digging gave me a pretty good answer to that. Turns out that Gartner, the independent research organization, has positioned TRIRIGA as a leader within its coveted Magic Quadrant
for Integrated Workplace Management Systems this year -- the spot on its evaluation chart characterized by excellence in both vision and execution.
Well, that tells me TRIRIGA's solution delivers not just all the key capabilities IBM would be looking for, but also the depth of features in each capability group.
According to Rob Schafer, Gartner's Research Director focusing on Integrated Workplace Management Systems: "TRIRIGA Real Estate Environmental Sustainability (TREES) and its early promotion of FASB (Financial Accounting Standards Board) accounting changes that will have a profound effect on the real estate industry are two examples of why it is the leading vendor on the 'Completeness of Vision' X axis in the Gartner Magic Quadrant for Integrated Workplace Management Systems."
Environmental sustainability is definitely on the minds of business leaders today. And TRIRIGA, according to Gartner, delivers the goods: "Designed to collect energy consumption and emissions data for buildings, [TRIRIGA TREES] provides a single, comprehensive repository of environmental data for workplace assets and operations."
That tidbit about FASB capabilities is also timely. This is all about pending changes to lease accounting rules, changes that are anticipated to put a major strain on financial reporting for public companies. Just collecting all the relevant data is no easy task. Actually analyzing and acting on it to generate the best possible outcome may seem like a Herculean labor.
Fortunately, says Gorczynski, "TRIRIGA provides advanced lease accounting capabilities to manage the vast amounts of data required to comply with these new rules." Gartner's report, similarly, opines that "TRIRIGA has had an early and valuable focus on the impending FASB accounting change that will likely eliminate the operating lease and have a material impact on the real estate function within most large organizations."
Of course, as strong as the TRIRIGA portfolio may seem, it's important to bear in mind that it's only one part of the IBM Maximo asset management suite (http://www.ibm.com/software/tivoli/products/maximo-asset-mgmt/), which is so full-featured you may sometimes find yourself wishing it applied to everything in your life.
If you check out a recent blog entry from Gorczynski, for instance, you'll learn how she wished it applied to her lawn (which, though green, is not perfectly maintained at all times). You can also watch a closely related video -- and if you'd like to make one of your own, you can submit one. If it attracts enough praise, IBM will then cheerfully promote it, and you, next March at Pulse 2012, its service management event, to be held in Las Vegas.

Additional Information
Customers of the world, unite! Recently I was discussing high tech with my mother, who continues to hold out hope that computers are only a fad. I can't blame her. First impressions are powerful, and her first impressions of high tech came from statistical analyses on a university mainframe using an interface of...punch cards.
This was a joyless experience from which she has only recently made a full recovery.
Even so, I continue to tell her that modern tech has its selling points.
Recently, for instance, I've pointed out how, because of the Web, consumers like her are way more powerful than they used to be. In the past, consumer feedback was, at best, a faint melody, heard dimly by business leaders. Today, thanks to the Web, that melody is routed through a 100-watt Marshall guitar amplifier, and it is loud enough to blow down the doors of companies that get things wrong.
Want an example? Ponder the case of Netflix.
In 2010, this company was a darling of the movie-watching public. But in 2011, thanks to a string of PR and pricing mistakes that were echoed instantly all over Twitter, Facebook and the Web generally, Netflix has lost close to a million subscribers. Its stock is sinking
, down about 75 percent just since July.
But for Mom, it's great news -- it means that if she complains, companies are a whole lot more likely to sit up and pay attention.

CMO: Does the C stand for Chief these days? Or Customers?
It was with this context in mind that I recently had a chat with Carolyn Heller Baird, the CRM Global Research Lead at the IBM Institute for Business Value. Baird is the global director for a recent IBM study that tackled a lot of the most important points in this area -- how things are changing for CMOs, what kinds of problems they're facing that they never used to face before and how the smarter ones are coping and even thriving.
More than 1,700 CMOs were queried about this stuff on a sit-down, face-to-face basis, each for an hour. And there was, in addition to that impressively large data set, a lot of diversity among the CMOs; the organizations for which they were CMOing spanned 19 different industries and 64 countries.
So given such a broad spectrum of info, I expected the study's conclusions to be both powerful and far-reaching. They were. I was not alone; check out other reactions at the CMO site.
"Clearly, the role of the CMO is evolving beyond just being the go-to person for brand stewardship and relationships with ad agencies," said Baird. "In this digital age, CMOs need to play a far more strategic role across the whole enterprise. They need to be more tech-savvy, and more dynamic in the way they use the insights they get from that tech."
So, for instance, there is the question of data analytics. What I was expecting to hear from Baird on this subject was something like this: "organizations need to do a better job analyzing their information to create user value."
But the story I actually heard was much larger than that. It turns out that CMOs need to acquire more kinds of information, from more sources -- including Web 2.0 sources. And they need to leverage it much more extensively than I would have imagined.
"Anything that comes out of advanced analytics that allows CMOs to not just capture traditional customer data like sales figures, but also understand what people are saying through social media would be a major step in the right direction," said Baird. "This way, they can get their arms around what's going on in the blogosphere -- in real time -- and actually turn that data into meaningful insights they can act on."
Baird also pointed out that these customer insights can be directed internally. To wit, they can inform not just obvious marketing platforms like online ads, and not just obvious business policies like what to charge for different kinds of services...but also subtler-but-still-important internal things, like how their company's unique values are understood by the outer world, and how this impacts the brand.
"Values, like everything else these days, are very transparent," Baird told me. "So it's critical that an organization's policies and practices -- how it behaves -- demonstrate its value system and that these jibe with the values customers think are important. Those values are propagated throughout the organization in all kinds of ways -- everything from how retirees are treated to a company's environmental policy is now a matter for the CMO because of the impact it has on brand perception. Most CMOs recognize this is an area where there is major opportunity for improvement."
Value propagation? Wow. That is certainly way, way outside the scope of what I would normally have considered a CMO's job description. But I really like the sound of it. I bet my mother would, too.

Track and improve your ROI -- and not just in the ways you'd think
Another major aspect of the way life-as-CMO is rapidly evolving lies in ROI. Not ROI in the way you'd imagine by default -- not the ROI of this campaign or that ad agency -- but ROI of a much more personal type: the return on the CMO's own investments.
This is a critical question: How much return is the organization actually getting from what the Chief Marketing Officer spends?
It turns out that this is another major strength of advanced analytics capabilities. Beyond helping the CMO understand and connect with customers, develop new strategies, verify that the right internal values are in place and propagate those values in many ways, they can also help quantify the value of the CMO's marketing investments.
And that's a major issue for today's CMOs, who, according to the IBM study, are more financially accountable for their decisions, and the success of their strategies at every level of abstraction, than they've ever been before. Some 63 percent of the study respondents said they think marketing ROI will be the most important measure of success over the next three to five years. Sixty-three percent! That's almost two-thirds -- a number that, in a political election, would be considered a historic landslide.
Obviously, those CMOs are going to need some help not just in tracking ROI, but also in improving it rapidly when it's not what it should be. And that means new analytics solutions specifically aimed at enterprise marketing.
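The ROI those CMOs will be judged on is, at bottom, simple arithmetic: net return divided by spend. A minimal sketch of tracking it per campaign -- the campaign names and dollar figures below are hypothetical, invented only to show the calculation:

```python
# Sketch: per-campaign marketing ROI, the metric 63 percent of
# surveyed CMOs expect to be judged on. Figures are hypothetical.

def roi(revenue_attributed, spend):
    """Classic ROI: net return divided by the investment."""
    return (revenue_attributed - spend) / spend

campaigns = {
    "email_q3": {"spend": 20_000, "revenue": 68_000},
    "social_q3": {"spend": 50_000, "revenue": 45_000},
}

for name, c in campaigns.items():
    # Positive ROI means the campaign returned more than it cost.
    print(name, round(roi(c["revenue"], c["spend"]), 2))
```

The hard part, of course, is not the division -- it's attributing revenue to campaigns in the first place, which is exactly where the analytics solutions discussed here come in.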
In fact, software solutions of this type can go beyond even that. They can actually perform the same function -- ROI evaluation -- for themselves. This struck me as particularly cool: software capabilities that not only do new things that help the organization, but also prove they've helped, in concrete terms. Imagine if every other asset you had did that!
So given this perspective, it seems to me that IBM's acquisition of Unica -- positioned by Gartner in the Leaders Quadrant for CRM Multichannel Campaign Management
-- was not just a good idea, but also rather far-sighted. That acquisition in 2010
directly addressed the life of the CMO today, a full year later, as revealed by this study.
And it also seems to me that if you're a CMO and the challenges I've been describing sound familiar to you -- possibly even intimidating -- you might want to give IBM Unica solutions a close look.
Either way, I pose this question: What kinds of marketing challenges are you facing these days, and what are you doing about them?
(Seriously, let me know. Because customer feedback is important to me.)

Additional Information
- Gain insights from the IBM Global CMO Study
- Discover end-to-end, integrated capabilities for Enterprise Marketing Management
- Hear more and join the conversation at the CMO site
- Check out the CMO study webinar
Ever been instantly sure something new was a good idea? Recently, this happened to me at a friend's house during dinner. She had concocted a new quesadilla -- duck and roasted tomato and apple -- that struck me as a winner right from the description.
Now, granted, I have little knowledge of quesadillas. I have not studied at quesadilla academies. I am not a mover and shaker in the quesadilla world. Nor am I the heir to a quesadilla empire created by my grandfather, the Quesadilla King of Mexico.
Even so. Duck + roasted tomato + apple = world-class quesadilla. No doubt in my mind.
Well, this week I had much the same sort of reaction watching a presentation from the New York Business Agility Executive Forum
by IBM WebSphere VP of Worldwide Sales, David Farrell. This presentation discussed what's standing in the way of business agility, and what IBM can offer to improve it.
And just like that duck-tomato-apple quesadilla, Farrell's presentation struck me immediately as a winning combination of ingredients.
"Business agility," of course, is a term that's defined in different ways by different folks. So I'll define it, very casually, like this: the power to change, quickly and effectively, to suit changing circumstances. To create new services, new products, new strategies, and do so in the least time, using the least resources.
This is not a little thing. This is a big thing. And it's a thing that pretty much every company, in every industry, is struggling with, to at least some degree.
Farrell put it rather bluntly: "There's a graveyard of failed companies who have failed to adapt, who have failed to recognize what was happening and deal with it in a proactive way. So the stakes are pretty high going forward."
If you want a specific example of the kind of stakes he's talking about, you don't have to look very hard to find them, either. For instance: following Napster and the rise of digital music and music piracy on a mass scale circa 1999, there were basically two available roads to take:
1. Adapt to the new world and become Apple.
2. Stick to the old business model and don't become Apple.
Turns out that Apple had it right.
So, in my opinion, does IBM. Business agility is a complex area; really improving it means both understanding it at a deep level and having the range of capabilities needed to deliver a better outcome. And I don't think there's any single organization on the face of the Earth as well positioned, in both respects, as IBM and IBM Software.
Said Farrell: "We're all observing the same thing -- a tremendous amount of change and volatility in the marketplace today. How do we harness all of this change and turn it into a competitive advantage vs. headwind?"
Here, in short, is the IBM answer to that question.

Smarter decisions: See what's coming and do the right stuff in the right order
Business success stems, ultimately, from the decisions the business makes along the way -- large and small. But all too often, that decision-making happens without full consideration of the best available data.
This is particularly true in the case of service delivery -- how services are rendered to users, customers and business partners via IT architectures. The goal should be to replace guesswork with analysis, to detect and use the trends and patterns that guide decision-making, and wherever possible, to address problems proactively before they even have a chance to manifest.
This may sound like common sense, but it's not -- at least judging by the way services are typically monitored and managed today. While most organizations have monitoring capabilities, they're still essentially handling outages in a reactive fashion.
A typical pattern: service goes down -- service outage is detected -- root cause is detected -- root cause is fixed -- service comes back online.
Let's compare that to this: service is predicted to go down -- IT takes steps to prevent that -- service never goes down.
You can improve on that still further by adding prioritization to refine the middle stage: IT takes preventative steps in the order that makes the most business sense, based on the anticipated business impact of downtime in different scenarios.
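To make the prioritization idea concrete, here is a minimal sketch of impact-weighted preventive work in Python. The services, failure probabilities and downtime costs are invented for illustration; a real system would feed these from predictive monitoring and business-impact models, not hard-coded values.

```python
# Illustrative sketch of impact-weighted preventive maintenance.
# Services, probabilities and costs are hypothetical; a real system
# would derive them from monitoring data and predictive analytics.

def prioritize_preventive_work(services, risk_threshold=0.2):
    """Order at-risk services by expected business impact of downtime.

    Expected impact = probability the service fails soon (from a
    predictive model) x cost per hour if it actually goes down.
    """
    at_risk = [s for s in services if s["failure_probability"] > risk_threshold]
    return sorted(
        at_risk,
        key=lambda s: s["failure_probability"] * s["downtime_cost_per_hour"],
        reverse=True,  # highest expected impact gets attention first
    )

services = [
    {"name": "order-entry", "failure_probability": 0.6, "downtime_cost_per_hour": 50_000},
    {"name": "intranet-wiki", "failure_probability": 0.9, "downtime_cost_per_hour": 500},
    {"name": "payments", "failure_probability": 0.3, "downtime_cost_per_hour": 120_000},
]

for svc in prioritize_preventive_work(services):
    print(svc["name"])  # payments first: biggest expected business impact
```

Note the difference from naively ordering by failure probability alone: the wiki is the most likely to fail, but the payments service tops the list because its downtime would cost the business the most.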
This is not a trivial improvement, but a dramatic one. And it makes the business as a whole much more agile.

Smarter processes: Connect the dots, simplify, and help your people collaborate better
Is there similar room for improvement in the way business processes typically work? You bet there is.
For instance, think of what happens when new software is created and rolled out -- software of the kind that drives all kinds of services to paying customers. This is usually a pretty clumsy process, and those paying customers are at the receiving end of the clumsiness.
See if this sounds familiar. The development team comes out with a build; the ops team deploys the build; customers try the new software and soon report that the build has problems. Then the development team comes up with a new build and the cycle starts all over again.
Well, so far so good, but the problem is that there's usually not much collaboration between the dev team and the ops team. Perhaps a new build requires a new set of Java libraries -- are the ops guys aware of that? If not, well...oops.
Similarly, if customers report problems to ops, is that information really getting back to the dev guys as fast as it could? Does that transfer of information resemble a cheetah? Or a glacier?
A much, much more agile process emerges when the dev guys and the ops guys share their information collaboratively, at all times, so that both are aware of just what's needed to render a better experience to customers. Which, really, is the whole point.
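One simple mechanism for that kind of dev/ops information sharing is a machine-readable manifest that ships with every build, which ops can check against the target environment before deploying. A hypothetical sketch (the build number and library names are made up for illustration):

```python
# Hypothetical sketch: each build ships with a manifest of its runtime
# dependencies, and ops validates the target environment against it
# before deploying -- so "new Java libraries" never come as a surprise.
# Build number and library names are invented for illustration.

build_manifest = {
    "build": "2.4.0-b117",
    "required_libraries": {"commons-lang": "3.1", "log4j": "1.2.16"},
}

installed = {"commons-lang": "3.1"}  # what the ops environment currently has

def missing_dependencies(manifest, installed):
    """Return required libraries that are absent or at the wrong version."""
    return {
        lib: ver
        for lib, ver in manifest["required_libraries"].items()
        if installed.get(lib) != ver
    }

print(missing_dependencies(build_manifest, installed))  # {'log4j': '1.2.16'}
```

A non-empty result blocks the deployment until ops and dev resolve the gap, replacing the "oops" discovery with an explicit, shared check.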
Smarter service delivery: Use the most efficient, scalable technologies
Here, we're talking in large part about delivery platforms themselves: the actual infrastructure that renders services to the people who use them.
If you've paid any attention to IT journalism in the last five years, your mental predictive analytics might tell you where I'm going next: cloud. That's because the improvements in agility you can get via cloud computing are absolutely stunning.
If you want a clear example, just look at provisioning times for cloud-based virtual servers. Manual provisioning, in a distributed architecture, typically takes days or weeks for a server cluster. Automatic, policy-driven provisioning in a cloud? Not weeks, but minutes. And every one of those virtual servers will be set up in exactly the right way -- no inadvertent mistakes. This is about as agile and accurate as you can get.

Mix and match to suit your needs
Now, in the course of trotting out my arguments, I have simplified things quite a bit. So let me add that there are a lot more capabilities, in each of these three areas, than the ones I've discussed.
And let me also add that if you work with IBM Software to improve your overall agility, you're basically able to order as if from a Chinese menu -- any capabilities you want, in any of the three areas, selected to address your particular needs and goals.
But any road you take, I think you'll struggle to come up with anybody but IBM who has all the experience, and all the solutions, you're going to need.
And that's why I say IBM is simply better positioned to help organizations improve their business agility, regardless of their specific context, than any other single IT provider you can name today.

Additional Information:
- Learn more about Business Agility
- Business Agility -- Predictive Business Service Management
- Business Agility -- Collaborative Development and Operations
- Read about Business Service Management
- Attend an IBM Business Agility Executive Forum near you
- View the InformationWeek Webcast on Business Agility

About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
IT professionals -- and I say this with compassion, having been one myself -- tend to think way too much about the T, and not nearly enough about the I.
What do I mean by that? I mean that while technology certainly drives business services, it is not, ultimately, the most valuable player on the IT team. Information -- data -- is.
Data suggests new strategies, quantifies their success or failure, and informs virtually every operational decision (whether it's made by a person or a processor). It's probably not going too far to say that, in a large sense, the fundamental mission of IT is get the best possible use from data throughout its lifecycle.
And while structured data, like core databases, usually gets most of the time, energy and money, it's unstructured data that comprises some 80 percent of the total in a typical enterprise. This is not the tip of the iceberg, but the hidden bulk of it.
Think of all those Word files, presentation decks, spreadsheets, and PDFs. Think about case notes written up hastily during a phone call; they may never make their way into a database, yet can contain incredibly powerful information. Think of the sum total of data created daily in internal communities, forums, wikis and other collaborative social platforms -- an area that's certainly hot and getting hotter by the day.
Is the enterprise really getting, as I put it earlier, the best possible use from that data?
The answer is almost certainly no, and the consequence is almost certainly diminished agility, creativity, innovation and responsiveness -- all key for the enterprise to succeed.
This is the heart of the argument for Enterprise Content Management (ECM) solutions. By acknowledging the crucial importance of unstructured data, and leveraging it for as much value as possible, organizations can put themselves in a much stronger, more informed, more competitive position going forward.

ECM solutions must evolve with the changing times
Not all ECM solutions are created equal, though. And not all ECM solution providers have the depth of insight, or provide the mature capabilities, that the enterprise will need for best results.
I recently had a chat with Craig Rhinehart, Director of ECM Strategy and Market Development for IBM (check out Craig's ECM blog), and he agreed on that point, noting that IBM has been developing leading ECM solutions for nearly 30 years and first published research on the topic in 1957, over 50 years ago. That's longer than most IT professionals have been alive.
And as enterprise infrastructures, content types, strategies and goals continue to evolve, he told me, IBM Software is continuing to evolve its ECM capability and portfolio in parallel, keeping close pace with the changing times.
"Actually, ECM has never been more relevant than it is today," said Rhinehart. "These solutions can drive value in an organization's most valuable processes. Think of insurance claims, for instance; they're really the make-or-break center of everything an insurance organization does. And claims processing typically revolves around many forms of unstructured data in the context of case management, all driven by the need to deliver better service to customers in a highly competitive market. So our ECM solutions are a perfect match."
That's a value proposition that's becoming more and more applicable over time, too. As unstructured content continues to expand in volume, and diversify in nature, major challenges emerge for enterprises in managing it all -- challenges that will often demand a new approach to ECM.

Five great ways to squeeze more value out of your unstructured data
"These challenges really come down to five different areas where we're seeing customers have problems," explained Rhinehart. "It's within them that content management gets applied and customers are seeing value."
One such challenge is document imaging and capture -- basically, grabbing data from non-digital sources, like faxes or snail mail, then sharing it and managing it in all the ways that digital solutions do best.
This is the sort of thing that can really generate tremendous value if it's done right. I once worked at a state government office where a team of more than 50 lawyers was chartered with responding to all snail-mail questions in two days or less -- no matter how complicated those inquiries might be. Given a turnaround time like that, efficient imaging and capture tools were critical to getting the job done, both right and on time.
And that's just scratching the surface, according to Rhinehart. "There's a global logistics company using IBM ECM production imaging technology to process 600,000 pages per day," he said. "They expect to process 4 million per day when the rollout is completed. And already, they move shipments across borders with 30 percent fewer resources than before. Really, any company has too much paper -- it's a great opportunity for enterprises to reduce cost and risk."

Social content management is another area where ECM capabilities can pay off in a major way -- partly because most of this content is extremely unstructured by nature. Collaborative platforms have typically been developed with a focus on empowering user communication, and rightly so, but it's important that all their content still be connected effectively to the organization's repository of record.
"It's the Wild West right now," said Rhinehart. "If customers don't have a social content strategy today, they need to get one pretty soon. And we at IBM are certainly investing in that area. We think of it as a sea change in business, and we plan to continue to lead the way."

Information lifecycle governance is a third area where ECM solutions can play a hand. Here, the focus falls on how information is managed throughout its lifecycle, in accordance with its business needs and other variables such as regulatory and legal obligations.
For instance, by identifying information of lower priority, then moving it to storage infrastructure of similarly lower cost -- migrating it from, say, disk arrays to tape or optical media -- organizations can preserve what they need, yet drive down the associated operational overhead. It also becomes possible to identify what isn't needed at all, eliminating it from the information infrastructure entirely and freeing up much-needed storage resources in the process. Rhinehart adds that "our solutions help our customers dispose of information in a defensible manner. You can't just hit the delete key."
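A lifecycle policy of that kind can be sketched as a small rule function. This is purely illustrative: the tier names, the one-year and seven-year thresholds, and the legal-hold rule are hypothetical, not any product's actual logic; real policies come from business and regulatory requirements.

```python
from datetime import date

# Hypothetical tiering policy. The tier names, the one-year and
# seven-year thresholds, and the legal-hold rule are illustrative --
# real governance policies come from business and regulatory needs.
def classify(record, today=None):
    today = today or date.today()
    age_days = (today - record["last_accessed"]).days
    if record.get("legal_hold"):
        return "retain-on-disk"      # held records are never migrated or deleted
    if age_days > 7 * 365:
        return "defensible-delete"   # past assumed retention: dispose defensibly
    if age_days > 365:
        return "tape-archive"        # cold data: migrate to cheap offline media
    return "disk"                    # hot data: keep on fast storage

record = {"last_accessed": date(2010, 1, 1), "legal_hold": False}
print(classify(record, today=date(2011, 6, 1)))  # tape-archive
```

The point of encoding the policy explicitly is exactly the "defensible" disposal Rhinehart describes: every migration or deletion can be traced back to a stated rule rather than an ad hoc delete key.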
ECM solutions can also add value by automating and optimizing content-centric processes. This is Advanced Case Management (ACM). According to Rhinehart, "ACM helps by addressing the ad hoc, exception-oriented business processes where collaboration is key and where getting the right decision made is the desired outcome. Traditional BPM solutions aren't the right approach for these processes. You wouldn't want to use a shovel to drive in a nail. ACM enables a more dynamic solution development process, avoiding many of the issues that make rolling out new applications a lot slower, harder and costlier than it should be."
Some organizations may describe ACM solutions as dispute management, customer service resolution, care coordination, interventions or even claims processing. These cases don't follow a typical straight-through process; they involve invoices, contracts and other forms of enterprise content, and tend to be customer centric. According to Rhinehart, one major retailing chain doing this is now saving US$2.1 million a year in call center labor alone.
Finally, content analytics can provide some of the most interesting, and potentially explosive, possibilities for unstructured data in the enterprise today. Just as traditional analytics tools focus on database-driven content, ECM analytics capabilities focus on unstructured content -- surfing through it for patterns or trends that (once implemented as strategies) can create new business value.
Rhinehart seems particularly impressed with the strides IBM has taken in this area in recent years, as exemplified by the success of the Watson project -- best known for having defeated Jeopardy champions in head-to-head, real-time competition.
"Watson uses IBM Content Analytics technology that is commercially available today for natural language processing. It's being used to leverage and exploit enterprise content by understanding business insights currently trapped in content. Content Analytics is being used to detect fraud, solve crimes, improve healthcare research, find new business opportunities, understand the voice of the customer and more. Think Business Intelligence for content."
I share his appreciation of both Content Analytics and Watson. Watson not only comprehends natural language queries, but also leverages many different analytics algorithms, running in parallel, to arrive at answers deemed likely to be accurate. This is well beyond the scope of ECM, or even enterprise IT as a whole, as it exists today.
"When you can pose questions to a computer in natural language, that's just a whole new ballgame -- that's something IT has never even tried to do before," said Rhinehart. "I've heard it said that every computer before Watson is nothing but a big calculator. And I think there's a lot of truth in that."

Additional Information
- Learn more about Enterprise Content Management
- Check out Craig Rhinehart's blog
- Check out the Enterprise Content Management blog
- Gain insight into the ECM Forum at Information On Demand 2011
What's the face of business today? Just 15 years ago, the answer to that question might have been "the retail presence" or, in a few rare cases, "the celebrity CEO."
But today, the best and most common answer is, quite simply, the web.
Doubt my conclusion? Consider the numbers. Recent statistics suggest that there are now more than two billion Internet users worldwide -- and as new platforms emerge, that number is rapidly escalating. Cell phone subscriptions have topped five billion (and you can be sure that before long, every cell phone on Earth will be web-capable).
Facebook alone has in the ballpark of three-quarters of a billion users. Twitter isn't terribly far behind, with more than 200 million registered accounts.
If you really want to engage your customers, you have to go wherever your customers are -- not just pull them in the direction you want them to go.
More, you have to provide a compelling, consistent experience for them, one that focuses on value as they define and perceive it, not as you do.
That means a lot more than just the company site (at least, the company site as it exists at most businesses today). It means leveraging the web as a whole to attract and interact with past, current and future customers. And it means connecting with them in a much broader sense than simply e-commerce. I'm talking about actually learning from them and using that information to serve them better, as well as empowering them to interact with each other.
Pursuing all those goals in parallel, though, will typically require more capabilities than organizations have at present. What's needed is a core platform that's smart enough, and flexible enough, to support business strategies, link customers, scale to unpredictable demand levels and expand over time to address new ideas going forward.
And I'm not alone in thinking that -- which is why leading IT providers are quickly introducing powerful new software solutions designed to deliver exactly those capabilities.
"We have a lead offering in this space," said IBM Program Director and Chief Strategist for Web Experience Software Nicole Carrier. "It's called the IBM Customer Experience Suite. And it provides a foundation for organizations that want to deliver these very compelling, differentiated, socially infused, mobile-aware experiences."
Let's tackle each of those adjectives.
Compelling and differentiated. "Compelling" is often used in modern marketing simply to mean "good." But in this case, the original definition applies as well: capable of beckoning, of bringing customers in. Toward that end, one key factor is personalization.
"Organizations can really differentiate themselves from the competition, and improve customer loyalty via an experience that is personalized to customer needs, to their behaviors, to their preferences, as well as the language of their choice," said Carrier.
If you want an example of what she has in mind, consider Lufthansa Airlines -- an IBM customer. This organization has moved away from delivering a stock, one-size-fits-all web experience to an experience carefully tailored to each specific customer -- and in every respect Carrier described.
"The first time you get there, it asks you for your country, your language; it's personalized for more than 80 countries and 12 languages," she said. "If you log in, you can then see your information, [such as] your flights, your awards, all your content tailored to your preferences."
Why is this crucial? It stands to reason that when sites understand customers better, they can serve customers better. And with better service, better business outcomes will emerge for organizations.
Mobile-aware. Another important form of personalization: the Lufthansa site is now device- and mobile-aware. It recognizes which type of device a customer is using, then renders to that device a version of the site that has been tailored to the device's strengths and weaknesses.
So, for instance, if the user happens to be on a smart phone -- typically characterized by both limited screen resolution and lower-bandwidth connection rates -- the Lufthansa site knows that and takes steps to compensate.
"You're not going to see a huge site that you need to scroll around with, that's hard to use," said Carrier. "You'll get an experience optimized for the form factor of your phone."
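At its simplest, device-aware rendering of this kind comes down to server-side template selection. The sketch below is hypothetical -- it is not Lufthansa's or IBM's implementation, and the token list and template names are invented; production sites typically rely on a maintained device database or responsive design rather than hand-rolled User-Agent matching.

```python
# Sketch of server-side device-aware rendering via User-Agent inspection.
# The token list and template names are hypothetical; production sites
# typically use a maintained device database or responsive design instead.

MOBILE_TOKENS = ("iphone", "android", "blackberry", "windows phone")

def pick_template(user_agent: str) -> str:
    """Choose a page template suited to the requesting device."""
    ua = user_agent.lower()
    if any(token in ua for token in MOBILE_TOKENS):
        return "mobile.html"   # lean layout: small screen, slower connection
    return "desktop.html"      # full layout: large screen, fast connection

print(pick_template("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)"))  # mobile.html
```

The same fork in the request path is where a site can also adjust image sizes and payload weight to suit the slower connections typical of phones.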
Socially infused. Beyond customer-by-customer personalization, though, another angle to consider is the social web. Sites that engage with customers, Web 2.0-style -- instead of simply selling to customers -- will almost by definition deliver a better customer experience.
You can think of this in terms of service management theory, if you like. Service management is all about aligning your products and services as closely as you can to what your customers need and want. But how can you do that if you don't know what they need and want?
Socially aware sites answer that question by providing a microphone for customers to speak up.
"The social web has really been a great equalizer in terms of getting customers' voices to be heard," said Carrier. "It could be as simple as allowing users to express their feedback by commenting or rating or participating in forums and communities on your site."
Simple trumps pretty
Customers, just like water and electricity, will typically follow the path of least resistance. That means no site, however powerful or sophisticated it may be in theory, will satisfy customer needs if it's hard to use. Just as the GUI replaced the command line a generation ago, easy-to-use sites are rapidly replacing those that strike customers as a barrier to entry.
In my own experience, this principle is so powerful, it can even outweigh another -- that unusually good-looking sites attract more user attention and create more business value for organizations.
Carrier sees things in much the same terms.
"I've seen a large number of sites where they're absolutely beautiful, they're like completely flashy, and you go there and try to get something done, but it's so hard to navigate; it's really hard to get the task accomplished," she said. "Organizations need to focus on ease of use and making sure that the actions people want to take when coming to your site are as easy [for them] to execute as possible."
Open pudding, find proof
So what, exactly, has been the business outcome for organizations that have deployed customer-experience web solutions of the type Carrier cites from IBM Software -- solutions that "provide a foundation for very compelling, differentiated, socially infused, mobile-aware experiences"?
"There is a large county government located here in the US that basically needed to make it a lot easier for citizens and various associations to do business with the county without having to drive all the way into offices or fill out tons of paperwork," said Carrier. "They built this really exceptional web experience and eliminated a bunch of information silos and presented information in a really nice, aggregated citizen's view instead."
And what kind of bottom-line benefits are they getting?
"Well, just last year from January to April, they had 10 million visits and collected $468 million in revenue."
I'm not sure "wow" is a strong enough word.
How does your company's face compare?