This blog post is written by Jackie Zhu and Ed Stonesifer.
IBM Content Manager OnDemand (CMOD) provides an enterprise-wide report management solution for computer-generated reports and many other types of content. The most recent release of CMOD added key features to help companies meet compliance requirements:
- Supports individual document holds through the new Enhanced Retention Management feature
- Supports integration with IBM Enterprise Records to provide full records management functions
Enhanced Retention Management
The new Enhanced Retention Management feature is one of the most valuable capabilities added to CMOD. It provides an immediate way to find and hold documents, preventing them from being removed by the normal document deletion process. It enables you to lock down individual documents within a report that is managed by time-based retention.
This is critical because any company might be forced to go through a legal inquiry. When that happens, all documents related to the inquiry must be retained and not deleted while the inquiry is in progress.
With the new Enhanced Retention Management feature, CMOD provides document retention management in one or more of the following ways:
- Time-based retention at the application group level.
- Efficient document deletions for different media types.
- Native support for putting individual documents on hold.
- Integration with IBM Enterprise Records.
Applying hold on documents
There are many ways to hold documents captured and managed by CMOD. One way is to first search for the documents and select them; then, you click the Action drop-down box and select Apply Hold to hold the documents. See the figure below. The user interface used for CMOD in this example is powered by IBM Content Navigator. When putting documents on hold, you also specify the hold reason.
Things to know about CMOD holds
- You can put documents on hold through a CMOD Windows or web client.
- Documents can be held based on different reasons, for example, a legal investigation.
- Holds prevent documents from being expired or deleted, but they do not change or manage the document expiration policy.
- You can put a large number of documents on hold at once.
- A document can be put on multiple holds for multiple reasons. Only when all the holds on a document are removed can the document go through the normal expiration process.
- Implied hold enables management of document retention by an external system.
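The hold rules above boil down to a small piece of logic: a document under time-based retention becomes deletable only when its retention period has passed and every hold has been released. Here is a minimal, purely illustrative Python sketch of that rule; the class and method names are invented for this example and are not the CMOD API:

```python
from datetime import date

class Document:
    """Illustrative model of a document under time-based retention with holds."""

    def __init__(self, doc_id, expiration_date):
        self.doc_id = doc_id
        self.expiration_date = expiration_date
        self.holds = set()  # active hold reasons

    def apply_hold(self, reason):
        self.holds.add(reason)

    def release_hold(self, reason):
        self.holds.discard(reason)

    def is_deletable(self, today):
        # A document may go through normal expiration only when it is past
        # its retention date AND every hold has been released.
        return today >= self.expiration_date and not self.holds

doc = Document("stmt-2024-001", date(2020, 1, 1))
doc.apply_hold("legal-investigation")
doc.apply_hold("audit")
print(doc.is_deletable(date(2025, 1, 1)))  # False: two holds active
doc.release_hold("legal-investigation")
print(doc.is_deletable(date(2025, 1, 1)))  # False: one hold remains
doc.release_hold("audit")
print(doc.is_deletable(date(2025, 1, 1)))  # True: past retention, no holds
```

Note how releasing only one of the two holds is not enough, which mirrors the multiple-holds rule above.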
For more information on Content Manager OnDemand, see IBM Redbooks publications.
Edward E Stonesifer is an IBM Executive Technical Sales Specialist with the Mid-Atlantic ECM Business Unit in the US. He works with IBM Content Manager OnDemand, specializing across all IBM eServer platforms.
Jackie Zhu is an IBM Redbooks Project Leader. She works with technical experts around the world to create books, guides, blogs, and videos.
This post is contributed by Tajunnisa Kamalapuram, Marketing Manager - IBM Security Solutions.
Today's users want to access information whenever they wish and wherever they happen to be, and they prefer to use their favorite device for both corporate and personal activities. Customers want access to your network or your cloud to make purchases, find information, or use applications. These are great business models and a boon for productivity, but without proper safeguards in place they can put your organization at risk of data breaches and unauthorized access to critical information. This perceived challenge around security is the primary reason for the slow adoption of cloud and mobile technologies.
IBM Security Access Manager for Cloud and Mobile is an important milestone for IBM Security Systems, as it can help you keep pace with the rapidly evolving security needs that emerge from new-age technologies like cloud and mobile.
IBM Security Access Manager for Cloud and Mobile helps you adopt cloud-based services and leverage single sign-on for secure information sharing across cloud environments. Using it, you can implement a powerful identity mediation service for cloud, SaaS, and web services while reducing administrative costs, establishing trust, and facilitating compliance. Built-in mobile OTP authentication provides increased identity assurance, with the ability to integrate with third-party strong authentication vendors.
IBM Security Access Manager for Cloud and Mobile extends user access protection to mobile and cloud environments using federated single sign-on, user authentication, and risk-based access depending on location, device, and access pattern.
With IBM Security Access Manager for Cloud and Mobile you can:
- Detect and prevent fraud with risk-based access that uses user attributes and real-time context (for example, location and device)
- Improve productivity and leverage new business models without worrying about security
- Ensure that authorized users have access to applications, data, and tools, while blocking unauthorized access
- Protect sensitive data and ensure compliance when sharing information across trusted and untrusted external locations and applications
- Change and control security policies centrally to quickly, consistently, and efficiently address compliance requirements
- Eliminate the need for multiple user IDs and passwords and provide a seamless single sign-on experience to users and customers
- Set and enforce policies for who can access what information, when, from what locations, and how much can be accessed in a set time period
- Use the new federation first-steps onboarding capabilities for Salesforce, Workday, Office 365, and Google Apps
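To make the risk-based access idea concrete, here is a toy Python sketch of a context-aware access decision. The attributes, weights, and thresholds are invented for illustration; the actual ISAM policy engine is configurable and far richer than this:

```python
def risk_score(request, known_devices, usual_countries):
    """Toy risk scoring: a higher score means a riskier request.
    Attributes and weights are illustrative only."""
    score = 0
    if request["device_id"] not in known_devices:
        score += 40          # unrecognized device
    if request["country"] not in usual_countries:
        score += 40          # unusual location
    if request["hour"] < 6 or request["hour"] > 22:
        score += 20          # atypical access time
    return score

def decide(request, known_devices, usual_countries):
    score = risk_score(request, known_devices, usual_countries)
    if score >= 60:
        return "deny"
    if score >= 40:
        return "step-up"     # require OTP / stronger authentication
    return "permit"

req = {"device_id": "laptop-1", "country": "US", "hour": 14}
print(decide(req, {"laptop-1"}, {"US"}))  # permit
```

The "step-up" outcome is where built-in mobile OTP authentication would come into play: a moderately risky request is not denied outright, but challenged for stronger proof of identity.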
For more information on how IBM can help you better manage and secure both the enterprise and end users, visit http://www.ibm.com/software/security/products/samcm/.
Get more security news by following @IBMSecurity on Twitter.
** UPDATE **
Join IBM experts on cloud security on Thursday, November 8, 2012 at 12 PM ET for a podcast, "A Framework for Securing the Cloud," hosted by Caleb Barlow. This event will also be available on-demand here.
How many times did you use CICS Transaction Server this year? This week? Today? Unless you're already familiar with IBM's 43-year-old transaction server, you might be scratching your head and thinking "I've never used it!".
Have you had lunch yet? If so, did you pay with a debit or credit card? Then you've used CICS. Did you pay for lunch with cash instead? CICS entered your life then too -- when you went to the ATM to withdraw the money.
And you're not alone. CICS Transaction Server handles a dizzying number of transactions every day. More than 30 billion transactions a day in fact (and at least three CICS customers are exceeding one billion transactions a day each). In the course of a week, those transactions are valued at over $1,000,000,000,000 (that's one trillion dollars). Every single week.
Almost every commercial electronic transaction that you make is processed by CICS. Consider the transactions involved in taking a business trip by train. You'll search for available travel times, book a train ticket, purchase travel insurance, and check in to a hotel room. Each one of those transactions needs to be completed quickly, securely, and reliably, and it's CICS Transaction Server that's behind them all.
CICS and System z: perfect partners
So what is CICS, and how is it still so relevant after 43 years? It's a transaction server that runs primarily on the IBM System z mainframe. System z is well known for its high availability, averaging about 5 minutes of downtime per year (by combining System z mainframes, that downtime is reduced to almost zero). Of the world's 25 biggest banks, all 25 use System z. A single System z mainframe is highly scalable -- it can comfortably run over a thousand virtual Linux images on a single box. CICS Transaction Server is designed to take full advantage of the System z platform, controlling the interactions between applications and users.
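That "about 5 minutes of downtime per year" figure is easy to translate into an availability percentage with a quick back-of-the-envelope calculation:

```python
# Convert ~5 minutes of annual downtime into an availability percentage.
minutes_per_year = 365 * 24 * 60        # 525,600 minutes in a (non-leap) year
downtime_minutes = 5.0
availability = 100 * (1 - downtime_minutes / minutes_per_year)
print(f"{availability:.4f}%")           # 99.9990%
```

In other words, roughly "five nines" of availability from a single machine, before any clustering of mainframes brings downtime close to zero.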
CICS provides applications with an extensive range of system services, such as security and transactional integrity. Application programs written for CICS use an application programming interface (API) to request these CICS services. The CICS API is provided in multiple languages, from COBOL to Java. There are APIs for presentation services (for user interfaces), data services (for retrieving and updating data), and business services (for manipulating data).
Out with the old, in with the new
The real beauty of CICS -- and a reason it is still going strong today -- is the ability to separate and reuse business logic. CICS applications that were designed to work with a green-screen 3270 terminal 20 years ago can be modernized to support web services today, without making changes to the original business logic of the application. CICS has remained current with changing middleware technologies: CICS has embraced HTTP web servers, Enterprise JavaBeans, Java adapters, and SOAP web services in recent years.
CICS and cloud computing
Today IBM announced a new version -- CICS Transaction Server V5.1. This new release addresses over 100 customer requirements -- a record for a new CICS release.
One of the improvements continues the CICS tradition of adopting emerging technologies, with support for cloud computing. CICS provides operational efficiency and service agility through cloud enablement.
Adopting CICS into your architecture
To learn more about CICS Transaction Server, and how application architects can incorporate the value of CICS into their business, take a look at the newly published IBM Redbooks publication Architects Guide to CICS on System z.
Martin Keen is an IBM Redbooks Project Leader. He leads publications on many areas of IBM software, including WebSphere, Messaging, and Business Process Management. Follow Martin on Twitter at @MartinRTP.
Here's a straightforward proposition: Software is more and more critical to the success of business strategies. So it's getting more and more critical to develop that software properly in the first place. Sounds simple enough, right? Just hire good engineers who don't write spaghetti code and who play well with others. Problem solved.
Well, okay, that actually works pretty well for a software startup. At a tiny, new-to-the-world organization, you've got a brand new kitchen to cook in and a very small number of cooks. Project management almost takes care of itself -- the two-topping pizzas zip out of the oven on time and under budget. They taste pretty good, too.
At the enterprise level, however, software engineering can easily go a bit wonky. Ponder if you will the following variables:
- The total size of a codebase -- FYI: measured in billions of lines of code
- The number of functional units to optimize and test
- The number of programmers on a project
- The extent to which applications and services rely on each other to work
- The number of years (or decades) in which a particular codebase has gradually and imperfectly evolved
Scale these variables up far enough and you may find you've gone from a simple pizza, perfectly executed, to something else: a monstrous, 50-course, semi-French cataclysm of a meal that nobody ordered, that smells funky and that, if put in front of diners, will be hurled violently back into the kitchen and cost the restaurant its cherished good name.
Well, I can see I've worked my cooking analogy far past its reasonable life expectancy. However, having made my point, I can get to the heart of the matter, which is this:
For the largest organizations and software engineering projects, today's integrated development environments (IDEs) are much more than just tools. The IDE is the individual practitioner's working environment, seamlessly integrated with team-wide capabilities. IDEs are collaborative partners -- mentors, even -- that help guide development teams, projects, applications, services and codebases down the road to successful application lifecycle management and enterprise modernization.
Given a robust, thoughtfully designed IDE, the best practices almost implement themselves
What with Rational Developer for System z version 8.5 (http://www-01.ibm.com/software/rational/products/developer/systemz/) hitting the streets this week, now seemed like a good time to discuss these and related issues with an expert.
That expert was Richard S. Szulewski, IBM Product Manager for that very offering. Szulewski put matters on an etymological footing that wouldn't have occurred to me.
"Just look at the term IDE," he said. "IDE: Integrated (that is, you have seamless access to all the facilities you need to do your job), Development (development is far more than just changing the code), Environment (a place from which to not just do your job, but do it effectively and efficiently). That is a lot more than just a pretty editor. That is what Rational Developer for System z offers."
And in Version 8.5, it offers a more complete and well-rounded rendition of that concept than ever before. The new solution has been designed specifically to help organizations not just get more value from the mainframe, and from their developers, but also get it at a higher level of abstraction -- from development projects themselves.
Consider, for instance, how it addresses the common concern of scalability -- not of the software being developed, but of the project of developing that software. To optimize large-scale project management, as everyone knows, best practices are required, but not everyone actually implements them. A really mature, thoughtfully developed IDE should make that implementation a lot easier.
Szulewski agrees. "Rational Developer for System z V8.5 includes enhancements that ease potential large-team effects as the number of people on development teams using it goes up. The idea is that any given user can access the host as if he or she were the only one using it."
For instance, consider the way the solution now automatically keeps programmer workstations up to date. Admins can simply upload new configuration files to the System z; once a programmer logs in, if the new file is needed, it'll be downloaded immediately.
That means more cross-team consistency with less effort -- a best practice by anybody's definition. It also means each programmer can spend more time on coding challenges and less on environment maintenance, which in turn leads to more productivity.
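The keep-workstations-up-to-date pattern described above (admins push a configuration file to the host; each client pulls it at login only if it has changed) can be pictured as a simple digest comparison. This is an illustrative Python sketch, not the actual Rational Developer mechanism:

```python
import hashlib

def config_digest(content: bytes) -> str:
    """Fingerprint a configuration file so differing copies are cheap to detect."""
    return hashlib.sha256(content).hexdigest()

def sync_on_login(local_config: bytes, host_config: bytes) -> bytes:
    """On login, replace the local config only if the host copy differs."""
    if config_digest(local_config) != config_digest(host_config):
        return host_config   # download the updated file from the host
    return local_config      # already up to date; no transfer needed

old = b"codepage=037\n"
new = b"codepage=037\ntemplates=v2\n"
print(sync_on_login(old, new) == new)  # True: stale client picks up the update
print(sync_on_login(new, new) == new)  # True: current client keeps its copy
```

The design choice worth noting is that the check happens at login, so admins upload once and every programmer converges on the same configuration without manual effort.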
Another example of scalability, this one addressing codebase size: programmers can now more easily search for, zero in on and open the specific code modules they want.
In much the same way a Google search provides a preview of the text at a given link, so that you can decide whether to click it, the new Rational Developer for System z generates a code preview. Just mouse over a module, and you can see the first few lines of its code -- it's as simple as that.
Write, visualize and test code quickly, easily... and in a way that isn't at all like French cuisine
Enhanced productivity, especially via editor refinements, is another major design strength of Rational Developer for System z V8.5. In the world of software development, editors are holy ground -- such deep investments, in fact, that they compare with religion and politics as reliable argument starters.
Well, the new Rational offering actually includes three different editors: LPEX, COBOL and PL/I. And strengths that had been limited to the COBOL editor in the past have now been stirred into the LPEX and PL/I editors, bringing them up to par.
While they differ in specific features, what the new editors have in common is the strategic goal of helping developers visually and intuitively understand and navigate the flow of code much more easily. By increasing the time developers stay in editing context, instead of having to wander elsewhere to do various tasks, the new editors also increase the developer's focus on the job at hand.
And the way the three editors have been brought into rough equivalence turns out to be an instance of a larger theme in the new release. "Rational Developer for System z V8.5," said Szulewski, "includes a conscious effort to get to better language equity in terms of the PL/I and COBOL languages."
New integrations are another strength. Since organizations often already have fairly well-developed, specific solutions and information repositories that address particular areas, such integration is a great way to leverage those resources more easily and fully -- eliminating the need to reinvent the wheel.
Organizations that already have Endevor, for instance -- a mainframe code management tool -- will find that the new Rational offering can directly display Endevor elements or packages in a tidy, sortable, customizable table.
Code coverage, too, has been improved, making it a much more straightforward matter to visualize how complete (or incomplete) software testing has been at any given point. Straight from the coverage report, it's now possible to launch a view of the source code to see colored annotations that reflect specific testing.
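Conceptually, a coverage annotation view like the one just described overlays per-line execution data on the source. Here is a toy Python sketch of that idea; the COBOL-ish snippet and the +/- markers are invented stand-ins for the editor's colored annotations:

```python
def annotate(source_lines, executed_lines):
    """Mark each source line as covered (+) or uncovered (-), and
    compute the percentage of lines exercised by the tests."""
    report = []
    for lineno, text in enumerate(source_lines, start=1):
        mark = "+" if lineno in executed_lines else "-"
        report.append(f"{mark} {lineno:3d}  {text}")
    covered = executed_lines & set(range(1, len(source_lines) + 1))
    coverage = 100 * len(covered) / len(source_lines)
    return report, coverage

src = ["MOVE A TO B", "IF B > 0", "  PERFORM PAY", "END-IF"]
report, pct = annotate(src, {1, 2, 4})
print("\n".join(report))
print(f"coverage: {pct:.0f}%")  # coverage: 75%
```

The uncovered `PERFORM PAY` line is exactly the kind of gap that jumps out of an annotated source view but hides in a bare percentage.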
Code review rules have also gotten a tweak for the better, thanks to additional COBOL and PL/I rules and templates in Rational Developer V8.5; you can even now create custom rules using an easy, wizard-driven process. It all illustrates just how serious IBM is about helping organizations pursue best practices through the new IDE.
"Creating an objective means for confirming best practice adherence -- that is what the new code review capability is about," said Szulewski. "We've made it easier and faster to define what the 'coding practices' you want should look like, and provided an objective way for the individual developer and whole development teams to compare their work against those practices."
And if unit testing is your particular cup of tea, you'll probably be glad to hear that in Version 8.5, Rational Developer for System z provides an automated unit testing framework, zUnit, which is similar in nature and concept to JUnit for Java and provides similar benefits. Here, too, smart wizards are available to generate COBOL and/or PL/I test cases.
After these test cases are built and run, the execution results can easily be displayed along with traceback information needed to isolate specific issues -- ultimately, helping to bring the software that much closer to a release version that won't remind anybody of French cooking gone horrifyingly wrong.
Additional Information
- Discover the benefits of Enterprise Modernization
- See what IBM offers for Application Lifecycle Management
- Get up to speed on IBM Rational Developer Version 8.5
- Watch videos about the features of Rational Developer for System z
- Try first-hand the new IBM Enterprise Modernization Sandbox, with no install
- Get more education with IBM COBOL and Rational Developer for System z - Distance Learning
- Visit the video library of IBM Enterprise Modernization Solutions for System z
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
It always surprises me to see tremendous potential go almost completely unrealized and undeveloped. A specific example: Recently I saw a YouTube video featuring a guitar worth somewhere north of a quarter million bucks. Yet the dealer who was trying to sell this guitar had recorded himself playing it with... a standard handheld video camera.
And I thought: "You know, if you want to sell a guitar worth $250,000, maybe you should record it with a microphone that costs more than $0.25."
Plunking down a few dollars more for a good mike would have made a world of difference to this guy's sales prospects.
A similar argument, or so it seems to me, often applies to IT. Platforms are bought with a particular purpose in mind, and used for that purpose, but a relatively small added investment might radically increase their total value.
Take the case of IBM Power Systems. This platform offers an extremely advanced processor architecture, IBM's RISC-based POWER7; advanced operating systems, including AIX (IBM's flavor of UNIX); top-tier virtualization capabilities that allow IT to allocate resources and manage whole workloads fluidly; and a host of other strengths too numerous to list here.
So organizations that have made the investment in Power Systems certainly know what an outstanding IT service delivery platform it is. What may not be as clear to them, and should be, is what an outstanding IT development platform Power Systems can be as well.
By developing on Power, they can not only make their investment pay dividends to both sides of the development/operations divide -- creating and deploying better software, faster, and yet with lower costs and risks -- but also take major steps toward enterprise modernization.
FYI, RD 8.5 for P7 is IDE: TNG
That argument got a lot stronger this week, because IBM's own integrated development environment (IDE) for this platform -- IBM Rational Developer for Power, Version 8.5 -- has just been released.
To get a sense of IBM's thinking in this area, I had a chat with William T. Smith, Market and Product Line Manager for IBM's Development Solutions for Power Systems Software.
"We saw that many customers were developing their AIX or Linux on Power workloads on some other platform and then porting to AIX, often without optimizing them for Power," said Smith. "And we were concerned to see them spending premium dollars for Power's unmatched price-performance profile and other unique qualities of service, but then failing to fully exploit those. Many of them are still using green screen or textual tools, or spending time cobbling together and maintaining home-grown OSS-based tool stacks, and therefore not realizing the productivity and other benefits of using Rational Developer for Power. So our goal for Version 8.5 was to have Rational Developer for Power start to play a central role in helping customers exploit AIX and Linux on Power to their fullest."
I knew exactly what he meant by "green screen or textual tools" because I recall using such IDEs in my distant youth. And the memory gives me no pleasure. There was not very much troubleshooting and productivity, and quite a lot of vertical scrolling and swearing.
It seems to me that last-millennium development tools like that are bound to act like an anchor hung from the development team's neck -- not really the best choice if the goal is to increase business agility. Which, for most businesses today, is a very familiar goal indeed.
But the new Rational offering goes far beyond graphic visualization, which has been part of the solution since 2010.
"This new release delivers three main new capabilities: a new Performance Advisor, a new, highly scalable code coverage analysis capability and a new Porting Advisor," said Smith. "Together these raise Rational Developer for Power's value proposition in the AIX and Linux on Power space in a profound way. Rational Developer for Power becomes not just an IDE, but an Integrated Development, Porting and Optimization Environment."
Let's suppose you happen to be a company that has already deployed IBM Power Systems. By deploying the new Rational IDE as well, you can...
- Generate applications that are really optimized for the POWER architecture, and run faster, with more stability
- Simplify moving your applications across platforms
- Identify and eliminate software bugs much more rapidly and easily
That strikes me as a winning value prop. And if you happen to be an organization still using green-screen tools, such as Smith describes above, well, your programmers will likely clap you on the back and buy you a beer. Their professional lives will have taken a gigantic step forward into the intuitive, graphic development interfaces of the 21st century -- a very good place to be.
Do you feel the Power?
Let's talk briefly about the optimization capabilities. Among the new features of Rational Developer for Power 8.5, perhaps the most compelling is the new Performance Advisor. This provides key insight needed to leverage Power strengths to the max -- not just in terms of analysis and tuning, but also by performance data management in a larger, more holistic sense.
You can, for instance, directly compare profiles of different builds to identify slowdown, drilling down into the details (like the time spent executing different functions within those builds). You can generate intuitive scorecards that illustrate real-world performance at a glance. You also get recommendations for future changes, each one assigned an estimated probability that the proposed change really will pay off.
How's that for "key insight"? It's no wonder that in its eight-month beta period, this capability was unanimously praised by all participants -- including many non-IBM organizations, of course.
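The build-to-build comparison described above can be pictured as a diff over per-function timing profiles. Here is a small illustrative Python sketch; the function names, timings and the regression threshold are invented for the example, and the real Performance Advisor works from far richer profiling data:

```python
def compare_profiles(baseline, candidate, threshold=1.10):
    """Flag functions whose time in the candidate build grew more than
    `threshold` times relative to the baseline profile (times in seconds)."""
    regressions = {}
    for func, base_time in baseline.items():
        new_time = candidate.get(func, 0.0)
        if base_time > 0 and new_time / base_time > threshold:
            regressions[func] = new_time / base_time
    return regressions

build_41 = {"parse": 1.2, "encode": 0.8, "flush": 0.3}
build_42 = {"parse": 1.3, "encode": 1.6, "flush": 0.3}
print(compare_profiles(build_41, build_42))  # {'encode': 2.0}
```

A small wobble in `parse` stays under the threshold and is ignored, while the doubling of `encode` is flagged for drill-down -- the same triage workflow as comparing two build profiles in the IDE.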
Smith thinks very highly of this particular innovation as well.
"The Performance Advisor really is something new and unique. Unlike other performance tools, it is very much designed for the development generalist, but it is also fueled by deep performance engineering expertise that reflects intimate knowledge of the internals of the Power architecture, the operating systems and the compilers," he said. "And in addition to being driven by expert advice, unlike other tools, it is also workflow-driven and deeply integrated into the IDE so that you can easily and naturally integrate the discipline and tasks of performance tuning into the routine development cycle."
75 percent of the Earth is covered by water -- IBM Rational covers the rest
Another major attraction: the new code coverage analysis for C, C++ and COBOL (on both AIX and Linux).
Now, there are lots of code coverage solutions out there. They all help dev teams establish how thoroughly code has been tested and therefore how bug-free and feature-complete -- in short, production-ready -- it really is.
What the new IBM solution offers is exceptional scalability of code coverage. No matter how large the codebase, the builds or the test coverage goals, Rational Developer for Power 8.5 is up to the job -- all with little to no perceived impact on developer productivity or application execution time. And in the enterprise, where the codebase and coverage requirements often trend very high, that kind of scalability is an absolute must-have.
For organizations that are looking to migrate C, C++ or COBOL software across platforms (read: to AIX/Linux on Power Systems), there's also the new Porting Advisor to ponder.
Using this tool, which leverages both static code analysis and expert system rules, developers can discover what kinds of issues are likely to turn up during the port, including such commonplace examples as big-endian vs. little-endian encoding, 32-bit vs. 64-bit processing requirements and signal-handling. Then, given that reconnaissance, the actual porting process can be orchestrated more easily and quickly -- a high-quality transition that results in a high-quality outcome.
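Two of the porting concerns named above, byte order and word size, are easy to demonstrate. This short Python snippet shows how the same 32-bit value serializes differently under big-endian and little-endian conventions, and reports the characteristics of the machine it runs on:

```python
import struct
import sys

# Where a port lands depends on the target's byte order and word size;
# these quick checks surface both for the machine running the code.
print("byte order:", sys.byteorder)                    # e.g. 'little' on x86
print("pointer size:", struct.calcsize("P") * 8, "bits")

# The same 32-bit integer serializes differently per byte order --
# exactly the kind of mismatch a porting pass has to catch in code
# that reads or writes binary data across platforms.
value = 0x01020304
print(struct.pack(">I", value).hex())  # '01020304' (big-endian)
print(struct.pack("<I", value).hex())  # '04030201' (little-endian)
```

Code that silently assumes one byte order or pointer width will compile cleanly and then misbehave on the other platform, which is why static analysis of the kind the Porting Advisor performs is so valuable before the port begins.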
Finally, if you happen to be using IBM's System i platform, the good folks at Rational have got your back there, too.
"It's true," said Smith, "that we did put a great deal of emphasis on AIX and Linux in this release, but that doesn't mean we overlooked our IBM i customers. (And by the way: props to them for seeing the elegance in how IBM i is integrated and optimized to simplify development of business applications.) There are several goodies in this release for them, such as the integration of the Remote Systems Explorer with IBM Data Studio, support for multiple build specifications and a new live outline view for RPG."
Additional Information
- See how Enterprise Modernization helps you get more out of what you've got
- Simplify your application lifecycle management
- Get up to speed on IBM Rational Developer Version 8.5
- Try out IBM products in the Enterprise Modernization Sandbox for Power Systems
- Estimate your savings with Rational Developer for Power Systems Software
- Watch videos that highlight features of IBM Rational Developer on Power
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Love and marriage. Spring and allergies. Bad economies and ROI. These concepts often come in pairs, and for good reason: when the first comes along, we need to pay more attention to the second. This is certainly true for the third example I've listed above. For both individuals and organizations, the question "How can I get the best ROI from the investments I've made?" is mighty popular right now. And for both, the answers are all too elusive.
If you're an individual, perhaps you decide to talk to a stockbroker -- a guy who charges you money to tell you things you probably already knew and who probably generates no clear value over time. (It's suggestive that stockbrokers, despite claiming extensive specialized insight going back multiple decades, are practically never billionaires.)
For businesses, fortunately, the situation is a lot brighter. Particularly in the case of a portfolio of applications, there are ways to go about improving ROI that are based on sound and consistent principles. And software solutions which are built on those principles are now available.
That, in sum, is what Application Portfolio Management (APM) is all about. If you think of applications as investments -- which, for organizations, they certainly are, and on a huge scale -- it's very logical to ask: "What kind of return am I getting from my investments? Which ones are vital to my business? Is there scope for consolidation? Which applications should I consider retiring? How should I optimize my investments to dial up my total ROI and dial down my total risk?"
APM solutions are particularly attractive to organizations at the enterprise level. That's because the largest organizations have giant portfolios of applications, and getting good answers to the above list of questions is therefore much harder. Similarly, in certain industries like banking, where applications have been in use for an exceptionally long period of time, the idea of introducing change to those applications is going to encounter more cultural resistance than usual. Change may be necessary, but it's really going to have to be justified with demonstrable ROI, if it's going to happen.
When you throw in the problematic economy we continue to face, in which ROI has taken on greater significance, it becomes pretty clear that the need for effective APM has never been greater. Yet in many cases, organizations have barely even begun to think about portfolio management in this context.
Recently I was very fortunate to be able to talk to a real expert in this area: Per Kroll, Chief Solution Architect for Application Portfolio Management at IBM. Kroll agreed with me that at many organizations, the time for APM and also project portfolio management capabilities is now -- and not just because the economy is bad, and ROI is a touchy topic.
"The basic problems have been around for quite some time," he said. "But they are getting worse every year and have now reached a breaking point. Companies can no longer continue with business as usual. They need to assess the value versus cost of all their current applications."
Kroll's slant on the cost benefit ratio of current applications is particularly intriguing.
The usual approach to portfolio management in the enterprise revolves around projects -- answering the question: "What is the ROI for this business project we're thinking about undertaking, or have just finished?"
Well, that question is sensible. But it has the effect of shifting the focus away from applications. It ignores the fact that a problematic application's influence can be, and often is, multiplied because it spans multiple business projects.
How do APM solutions help? They put the focus right back on the applications. And they provide a clear, logical path organizations can follow to get more value, and lower risk, from every application in the complete portfolio.
IBM solutions including IBM Rational Focal Point, System Architect and Asset Analyzer can be used to pursue the following steps:
1. Create an application inventory.
2. Provide initial information about each application.
3. Analyze applications and determine which need more investigation.
4. Make decisions, such as whether to consolidate, modernize or move to the cloud.
5. Execute and track those decisions through project proposals and project delivery.
Enhance both IT development and IT operations, and receive more value from every application you have
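The analysis step in the list above lends itself to a simple scoring model. As a rough sketch -- not how Focal Point or any IBM tool actually scores portfolios, and with made-up field names, weights and thresholds -- here is one way to rank an application inventory by business value against cost and risk, flagging the weakest performers for deeper investigation:

```python
# Hypothetical portfolio triage: score each application on
# value vs. (cost + risk), then flag low scorers for further
# investigation. The formula and fields are illustrative only.

def triage(inventory, threshold=1.0):
    """Return (name, score) pairs whose value/(cost + risk) score
    falls below the threshold, sorted worst-first."""
    scored = [
        (app["name"], app["value"] / (app["cost"] + app["risk"]))
        for app in inventory
    ]
    flagged = [(name, s) for name, s in scored if s < threshold]
    return sorted(flagged, key=lambda pair: pair[1])

inventory = [
    {"name": "core-banking",     "value": 9.0, "cost": 3.0, "risk": 2.0},
    {"name": "legacy-reporting", "value": 2.0, "cost": 4.0, "risk": 3.0},
    {"name": "branch-portal",    "value": 6.0, "cost": 5.0, "risk": 4.0},
]

# core-banking scores well above the threshold and is left alone;
# the other two surface as candidates to consolidate or modernize.
print(triage(inventory))
```

The point of even a toy model like this is that the decision becomes data-driven: the ranking falls out of the inventory, not out of whoever argues loudest.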
IBM thus helps organizations optimize their application portfolios the way a stockbroker is supposed to help an individual optimize an investment portfolio -- in a balanced, objective and data-driven way that takes full advantage of proven best practices.
This approach generates many positive effects. And the more creative the company is in using APM solutions, the more benefits it will realize. Some of them are more obvious and some more subtle, but the possibilities really are endless.
Looking for something big and obvious? Think about the 80/20 rule of IT budgets. This says that typically an organization spends 80 percent of its total budget on IT operations ("keeping the lights on") and only 20 percent on strategic innovation ("doing new stuff to make the business grow").
What IBM APM solutions do -- unlike certain alternatives -- is deliver value to both halves of that ratio. And particularly in operations, that's welcome news in the enterprise.
"Seems to me like companies have got the 80/20 rule wrong," said Kroll. "Many implement an objective and transparent portfolio management process only in development, to determine how best to spend the 20 percent of funds going to new projects. But what about the 80 percent on the operations and maintenance side? That's often decided based on a 'who screams the loudest' approach -- the squeaky wheel gets the grease whether it deserves any or not. Our APM capabilities make decisions like that a great deal more objective."
A subtler, but still powerful, improvement lies in the area of information transparency. APM solutions, once applied, have the effect of pulling key information out of the shadows and into the spotlight, where it can deliver more value through wider utilization (and/or correction or revision, if necessary).
Because it's revealed, that information also becomes more resilient -- surviving the loss of key employees, for instance, who leave the organization.
"Think about what happens when people make decisions about investment levels, modernization targets or which applications to move to the cloud," said Kroll. "Usually the relevant information is distributed in people's heads throughout the company... or hidden in spreadsheets. APM is about revealing that information (including analytical processes), prioritizing it, and making it all easily available, to anybody who needs it, at the time decisions are made."
That reference to cloud brings up yet another point. Cloud and APM turn out to be closely related areas because APM-based insights can significantly improve the odds of a cloud's success.
How? Given an application inventory in which each application's context (risks, costs, complexity, etc.) has been quantified and analyzed, that information can be very useful in deciding which applications are the best candidates for clouds and choosing specific cloud models. This is really important, because picking the right set of applications and the right model can make or break a cloud project. APM insight not only helps ensure the chosen applications will scale well in a cloud, but also addresses other factors -- security, for instance, or business criticality -- that definitely need to be taken into account as well.
Furthermore, these same APM solutions can be used to support and enhance many other kinds of initiatives as well, some of which are hot and rapidly getting hotter.
Kroll agreed. "The interest in APM is growing so rapidly right now partly because people need to make so many new kinds of application-related decisions -- not just cloud, but also in areas like mobility, regulatory compliance and outsourcing," he said. "And once you've built an application inventory that captures value, costs and risks, all these decisions are much easier to make. IT's job is not to say 'no,' but to help the business establish the constraints and trade-offs at hand."
Innovate 2012, to be held June 3-7 in Orlando, Florida, offers more on portfolio management, enterprise modernization, application lifecycle management and more -- with nearly 400 technical sessions and more than 20 tracks -- to give you insight into how software can help your organization cut costs, drive innovation and reduce risk. Be sure to register ( http://www.ibm.com/software/rational/innovate/register.html ) by March 14 to save US $200.
Additional Information
- Get more details on Application Portfolio Management solutions
- See how to align your business with your strategy
- Find out how IBM can help you with enterprise modernization
- Watch the demo on Smarter Application Portfolio Management with IBM Rational
- Check out what's planned for Innovate 2012
- Register for Innovate 2012 by March 14 and save
- Read a commissioned study conducted by Forrester Consulting, Measuring The Total Economic Impact Of IBM Rational Integrated Solution for Application Portfolio Management
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Endpoint management is like a headache looking for an aspirin. Recently I asked my friend Perry -- an IT manager at a Very Big Company -- what endpoint management was like where he works.
"Cat-herding," he said.
"But don't you have some sort of endpoint management products?" I asked.
"We use a combo of third-party stuff and the stuff that comes with the OS."
"And? Don't they help?"
"Well," he said after a pause, "they make the cat-herding more advanced..."
Turned out that in Perry's case, the endpoint management strategy, though it does a certain amount of herding, also adds to the number of cats.
Consider his rough estimates:
- 24,000 user desktops and laptops
- "Low thousands" of virtual and physical servers -- the number changes every day
- Four fundamentally different operating systems (Windows, Mac OS X, UNIX and Linux -- all in different flavors)
Worse, his endpoint management solution isn't really centralized. It requires quite a few new servers (to handle all the endpoint management) and quite a few agents (a different one for each task like security, anti-malware, software distribution, asset management) deployed on all those endpoints. Pulling all of that together to get things done is cumbersome.
Actually, he didn't say "cumbersome." I can't print what he did say.
Mobile devices are changing the game -- is your endpoint management solution up to the challenge?
Things are getting more complicated, too. With the instant popularity of mobile devices like smartphones and tablets, the number and diversity of endpoints have rapidly scaled up.
That means more operating systems, more agents, more security wrinkles and more compliance challenges to consider -- not to mention the host of human-interest issues that apply to personally owned endpoints.
I asked Perry what his answer was to all of that.
"Same as it was five years ago," he said. "Be thankful I don't have to do endpoint management stuff any more."
Well, I couldn't resist telling him about the IBM Endpoint Manager family, which applies neatly to a typical situation like Perry's:
- One agent for a wide range of capabilities
- One server, capable of handling up to a quarter-million endpoints (almost 10 times as many as Perry's organization has)
- One interface to use in gathering and analyzing endpoint information, as well as carrying out endpoint tasks
You might wonder how that one server is up to the job. The answer: high agent IQ. The Endpoint Manager agent actually leverages the endpoint's own resources -- not the server's -- to handle most of the load of tasks like rolling out new apps, installing security updates, changing firewall settings, tracking the number of licensed copies of software and so on.
And yet it only requires 2 percent or less of endpoint resources, so users don't even notice the agent doing anything. So all those endpoints are no longer cats to be herded; they are, instead, a de facto grid architecture that distributes computational tasks evenly and handles them transparently. Pretty slick, no?
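The architectural idea described above -- one lightweight server, with the heavy lifting pushed down to a smart agent on each endpoint -- can be caricatured in a few lines. This is a toy sketch under stated assumptions, not the IBM Endpoint Manager protocol; the class and method names are invented for illustration:

```python
# Toy sketch of the "smart agent" model: the server only broadcasts
# a task description, and each endpoint's agent does the actual work
# with its own local resources. Names and structure are illustrative
# assumptions, not a real IBM Endpoint Manager API.

class Agent:
    def __init__(self, hostname):
        self.hostname = hostname
        self.installed = set()

    def apply(self, task):
        # The heavy lifting (download, install, verify) happens here,
        # on the endpoint -- the server never touches the payload.
        self.installed.add(task["package"])
        return (self.hostname, "ok")

class Server:
    def __init__(self, agents):
        self.agents = agents

    def roll_out(self, package):
        # One lightweight broadcast per task, regardless of fleet size --
        # which is why a single server can face a quarter-million agents.
        task = {"package": package}
        return [agent.apply(task) for agent in self.agents]

fleet = [Agent(f"host-{i}") for i in range(3)]
results = Server(fleet).roll_out("security-update-42")
print(results)
```

The design choice to notice: the server's cost per task is a broadcast, not N installations, so scaling the fleet barely scales the server.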
All of that came as news to Perry.
What came as news to me, recently, is that the same product family will soon work for those mobile endpoints I mentioned earlier, like smartphones and iPads.
Soon-to-be-released IBM Endpoint Manager for Mobile Devices supports four major mobile platforms
With the advent of IBM Endpoint Manager for Mobile Devices, IBM is tackling one of the biggest shifts in endpoint management in years: the fact that people increasingly want to use (and do use) their own personal devices to handle work stuff.
"We're living in a mobile world," said Kimber Spradlin, Product Marketing, IBM Endpoint Manager family. "Organizations are going to have to find ways to manage mobile devices, too, not just traditional endpoints like servers and laptops and desktops. And IBM Endpoint Manager for Mobile Devices really makes that job easy because it builds on our current platform, so you get the functionality you need, not the complexity you don't."
Specifically, it handles devices based on four mobile platforms: Windows, Apple's iOS, Symbian and Android. And because those platforms handle security and management tasks in different ways, Endpoint Manager for Mobile Devices supports both agent and 'agentless' control mechanisms. This way, a single management solution can continue to address all endpoints -- even though some of them don't allow agent installation at all.
"Apple's iOS doesn't," said Spradlin. "But Apple does provide a management API. So this can be used to handle certain tasks, like partially wiping work e-mails, or calendar data, if the organization needs to be protected from exposure. Android, on the other hand, does allow an agent, so we simply ported our current agent to that platform. In every case, the idea is just to provide the management functionality, and security controls, to whatever extent it's possible."
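That agent/agentless split is essentially a per-platform dispatch: the same logical task is routed through whichever control mechanism the platform permits. The sketch below is a hypothetical illustration of that pattern -- the iOS and Android mechanisms follow what Spradlin describes, the Windows and Symbian entries are assumptions, and none of the function names come from a real API:

```python
# Hedged sketch of the agent vs. agentless split: map each platform
# to a management mechanism, then run the same logical task (here,
# a selective wipe) through whichever mechanism applies. Function
# names are invented for illustration, not a real product API.

def wipe_via_agent(device):
    # An on-device agent carries out the task locally.
    return f"agent on {device} wiped corporate data"

def wipe_via_mgmt_api(device):
    # No agent allowed; the platform's management API is asked instead.
    return f"management API asked {device} to wipe corporate data"

# Per the article: iOS disallows agents but exposes a management API,
# while Android takes a ported agent. The Windows and Symbian entries
# below are assumptions made for this sketch.
MECHANISM = {
    "ios": wipe_via_mgmt_api,
    "android": wipe_via_agent,
    "windows": wipe_via_agent,
    "symbian": wipe_via_agent,
}

def selective_wipe(platform, device):
    return MECHANISM[platform](device)

print(selective_wipe("ios", "ceo-ipad"))
print(selective_wipe("android", "dev-phone"))
```

The payoff of the pattern is the one the article names: a single management solution keeps addressing all endpoints, even the ones that refuse agents.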
Security does seem like a significant issue; mobile endpoints, by nature, move from point A to point B much more often. And if your smartphone disappears on a vacation, you probably don't want outsiders being able to go through the phone, reading company mail and accessing company resources. That's true whether you're the employee who lost the phone, an IT manager who works with that employee or an exec with a focus on minimizing business risk.
For employees who might be concerned about the sensitivity of personal data, an important point is this: the IBM offering protects you, too.
Suppose your missing phone is loaded with family photos that show your kids, your street address, your pricey new car and other things you'd rather a phone-stealing criminal not be aware of. You can simply request that your phone be data-wiped -- or wipe it yourself through the self-service portal, if your company implements that option. And presto, it will be.
Create an in-house app store for extra value
Also interesting: Endpoint Manager for Mobile Devices allows organizations to create an enterprise app store. This way, they can offer specific new capabilities for mobile devices in a way that -- just like the security controls -- is of direct benefit to employees.
For instance, organizations might be able to get a significant discount on third-party apps by buying licenses in bulk, and then passing on the discount to employees. "Reduced rate" is a popular phrase when it comes to software purchases.
And, of course, there's a security angle to consider here as well. Employees can download apps from the enterprise app store in confidence that they've already been exhaustively scanned for malware, and are endorsed by the organization as trustworthy. That's not always the case for new apps -- and as mobile device popularity continues to skyrocket, the odds of encountering security-problematic apps go up every year.
Similar value stems from apps that are developed internally. Imagine an organization has a unified asset management solution. Imagine that solution is used in vastly different ways by dozens of different operational groups.
In such a case, the organization might create feature-limited, task-focused apps that target exactly what those groups need to do. These apps could then be offered via the app store for easy downloading and installation to any supported mobile device.
This story gets even more appealing when you consider that, over time, as new versions are released, the older versions installed on endpoints would normally go out of date. That could translate into all sorts of unwanted ramifications, from less-than-ideal performance or stability all the way up to something a lot more catastrophic, like a serious security shortcoming that leads to a breach of company services.
"What you're talking about is endpoint lifecycle management," said Spradlin. "That's one of the areas where IBM Endpoint Manager shines. For mobile devices using apps like that, it's great to be able to push out new versions -- knowing in advance which endpoints need them and skipping the others. Now, the device owner still has to approve the installation, so it's not completely automatic... but then on the other hand, that user probably wants to know when new apps are being installed, right? So there's a nice balance between the organization's need for risk management and productivity, versus the user's need to be aware of what's on the device and what it does."
Interested in learning more? Sign up for the beta, and be sure to attend Pulse 2012 in Las Vegas, where mobile endpoint management will be a major theme -- providing you with a lot more specific information about this offering, slated for a March release date!
Additional Information
- Sign up for the IBM Endpoint Manager for Mobile Devices beta
- Explore the Mobility and Endpoint Management stream at Pulse 2012
- Register for Pulse 2012 today
- Discover how IBM Mobile Enterprise can help you improve productivity, grow market share, drive innovation and enable a social enterprise
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
In my last blog post, I said that one very common idea underlying best practices today is this: "faster is better." There are different ways to get faster, though. And some are certainly more appealing, in a given context, than others.
For instance, consider the context of IT development. This is a world of business logic, of algorithms rendered in specific code, and of the software development environments in which the first is alchemically transmuted into the second to create software-driven services.
Faster software-driven services mean faster (and more) business transactions. This is certainly better than slower (and fewer) business transactions.
Now: What's the most efficient way to make your software faster?
If you're an IT ops guy, you probably see the world through the lens of technology infrastructures. So your response would be something like this:
"We need to buy a faster host. Or, even better, redeploy the app on a grid or cloud architecture. That means we need to get the IT dev guys to rewrite the code so the app's work can be distributed in discrete chunks across that architecture for parallel processing. At that point, to get more speed, we can just add more physical hosts and virtual servers, as well as other resources like virtual storage or network bandwidth as required. Easy as pie."
But if you're an IT dev guy, you probably got a headache reading all of that, and you see IT ops guys as the enemy. (I'm kidding. Everyone knows IT management is the enemy.)
The idea of completely reworking and redeploying mission-critical applications along these lines sounds slow, risky and impractical. It's difficult enough doing the thing the organization already asked you to do: add new software capabilities to the existing codebase, which was created by completely different guys, at a completely different point in time years ago and intended for completely different hardware.
As far as performance optimization of the whole codebase goes? Well, every neat little trick you might add to the code, to speed it up, introduces the possibility of that app now breaking unexpectedly. And that is a totally unacceptable concept, because your organization depends on the software to create value for customers and thus miraculously make headway even in the current gloomy business climate.
So to you, the IT dev guy, what is the best way to speed up mission-critical software? Ideally, it would involve:
(a) no new coding or code-tweaking required
(b) no new risk that the code will break (because of the clever tweaks you added to speed it up)
(c) no catastrophic service downtime (that creates lots of media attention and generates an estimated $1 bazillion in lost revenue)
(d) no pink slips allocated to IT dev guys, due to the above
(e) no new hardware required
That sounds pretty dreamy. Is it actually possible?
Recompile your code, get faster software-driven services
Turns out that it is. I was fortunate to be able to talk to Roland Koo, Product Manager for Compilers at the IBM Software Solutions Toronto Lab, and he gave me the inside story.
"Upgrade your compilers," said Koo. "Move to better compilers, and all of that can happen. The compiler's job is to make life easy for programmers, so they can focus on getting the business logic right."
How do compilers deliver on this value proposition? Just consider what they do -- and how they work. After a programmer writes up business logic in code (using a specific language, like C++ or COBOL), the compiler then cruises through the code, translating it into machine code (processor instructions) for a specific processor. This machine code, in turn, is what actually runs on the IT production servers (or mainframe).
And because compilers are not all created equal, some do a much better job than others at generating fast machine code. The smarter the compiler, the more efficient the machine code it generates will be -- translating directly into faster software-driven services.
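The effect is easy to demonstrate even outside the mainframe world. CPython's own bytecode compiler, modest as it is, performs simple optimizations such as constant folding: the compiled code for an expression like 60 * 60 * 24 carries the precomputed result rather than two multiplications, so the arithmetic costs nothing at run time. The same source, compiled smarter, runs faster -- which is the whole argument:

```python
# Even CPython's bytecode compiler illustrates the point: it folds
# constant expressions at compile time, so the compiled code object
# carries the answer (86400) instead of the multiplication steps.
code = compile("60 * 60 * 24", "<example>", "eval")

# The folded result appears among the code object's constants;
# no multiplication survives into the generated "machine code".
print(86400 in code.co_consts)  # -> True
```

An optimizing System z compiler does the same kind of thing on a vastly larger scale -- instruction selection, inlining, vectorization -- but the principle is identical: the source is untouched, only the generated code gets better.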
In this sense, then, compilers are much more than just one more technical element of software development. They are the most direct liaison between your software development team, which speaks one language, and the hardware your applications run on, which speaks another. So by investing in superior compilers, organizations can get both superior software and a superior business outcome from it.
Koo put matters even more directly than that: �You cannot maximize your return on investment unless you stay current with compiler technology.�
I have to agree with him. Note how quickly organizations can get that improved ROI: simply install the new compilers, recompile the code as-is and deploy the new applications the compiler generates. No risky code-tweaking is required. No new hardware is required. No new business risk of service downtime is introduced, because the code itself wasn't changed -- only the efficiency of the machine code generated from it.
New IBM compilers offer accelerated performance with no hardware upgrade required
Look at how that applies in the case of IBM System z compilers, for instance. System z mainframes run some of the most mission-critical services in the business world -- customer-facing online banking services, for instance. Better performance is always needed for such services, yet customer tolerance for downtime is practically zero.
So banks need a way to accelerate services without introducing new risk. That's exactly what IBM's new System z compilers, for COBOL, PL/I and C/C++, can deliver -- and not just for banking, but for any industry in which mainframe-based services face the same context.
Koo emphasizes that no new hardware needs to be purchased. "You do not need to upgrade hardware to upgrade compilers," he said. "In fact, upgrading compilers is a cost-effective way to get more out of existing hardware technology. You can take advantage of new improvements in both optimization and programmer productivity."
In that second category, programmer productivity, another point to consider is that IBM's compiler technology leverages IBM's strengths in related areas, such as development tools, middleware, databases (like DB2), transaction systems (like CICS and IMS) and modern application development tools such as IBM Rational Developer for System z and Rational Team Concert for Enterprise Platforms, providing a high-productivity environment for developing business-critical applications. Because IBM offers them all, it can also optimize its compilers in ways no competitor can, delivering even better performance for code that involves IBM middleware via integrated, pre-processor support.
Finally, while hardware upgrades aren't essential to get impressive, measurable business benefits from a new compiler, a new hardware/new compiler combination is unquestionably a great way to go, given the option.
In fact, there is up to a 60 percent performance improvement on zEnterprise (the eleventh generation of System z mainframes) for C/C++ applications, when compared to running the same applications on System z10. That's what IBM's own internal tests have shown, and that's probably not too far from what organizations with IBM mainframe-driven services can expect to get as well.
How are you accelerating mainframe applications these days?
Additional Information
- See how the latest IBM compilers can help you save money
- Register for this webcast to see what IBM's latest compilers can do for you
- Connect to the IBM Rational Café Communities
- Learn more about IBM Software for System z
- Read about System z components (including compilers)
- Read a paper on The Economic Impact of Mainframe Computing
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
This week, I wanted to take a look at enterprise modernization capabilities and efforts in the context of software development. So, in the tradition of this blog, I'm going to begin by discussing a completely different area: electric guitars.
The electric guitar industry is one in which conservative design has almost completely trumped any attempt at modernization for the last half century. In 1954, there were two major companies that made electric guitars: Gibson, best known for the Les Paul model, and Fender, best known for the Stratocaster.
Here we are in 2011, and none of that has changed. Why?
The basic problem is that guitarists are typically less interested in innovation and more interested in tradition -- they want a guitar with vintage feel and tone that, in a perfect world, also looks intensely vintage. They want it so much that today, there are aftermarket services available to "relic" a guitar: make it look older and shabbier by attacking it with a razor blade, pouring acid on it, leaving it out in the sun for hours, etc.
I am not making this up.
You should also know that the priciest electric guitar you can buy today is a 1959 Les Paul Standard with a sunburst finish -- currently valued, even in this economy, somewhere north of a quarter million dollars.
So it's no coincidence that modern luthiers, well aware of that situation, have responded by creating new guitars that try to recreate, with many improvements, the famous 1959 Les Paul Standard mojo. Such guitars are a sort of synthesis of the best of the old and the best of the new -- and a clever way to get some sort of traction in a super-challenging market.
Balance innovation and convention and get higher business agility
This, of course, brings us to the point of this blog entry. Enterprise developers, particularly those working in mainframe environments, often face a conundrum similar to that of modern luthiers -- and IBM is helping them solve it in a similarly clever way.
A recent announcement from IBM on the subject of business agility, for instance, included several elements pertinent to this theme of blending-old-and-new-to-achieve-a-better-outcome.
One that stood out to me was this: IBM Software is giving organizations a great new way to test new System z mainframe applications, one that preserves and even extends the roots of the traditional mainframe value proposition.
This, as it turns out, is really important. Mainframe applications and the services they drive are right at the heart of many leading industries -- banking in particular comes to mind -- and are thus no place for compromise or risk. Furthermore, many organizations have made unusually deep investments in mainframe applications; this investment creates an unusually conservative outlook and a reluctance, even beyond the usual reluctance, to rip-and-replace with something new.
Even so, newness is, to at least some degree, what today's demanding market requires -- the innovation that distinguishes a company from its many competitors. It's also what business agility is all about. What is business agility, after all, but the ability to change, quickly and effectively, in parallel with new strategies or new challenges?
Test your System z applications on commodity x86 hardware
And that, in sum, is just what the new IBM Software offering -- IBM Rational Developer for System z Unit Test -- can help bring to mainframe developers. Specifically, this technology (which is part of the IBM Rational Developer for System z family) gives developers the power to develop for System z more quickly, more easily and at lower cost -- all of which contribute directly to higher agility.
When I talked to David Myers, Software Product Manager for Rational Enterprise Modernization and Compilers, his case for this new offering struck me as a strong one.
"After decades of investment in green-screen mainframe development and tools, many organizations are looking for a new -- but not too new -- approach," he said. "We're giving it to them by modernizing the tools and processes they need to become more productive and spur collaboration."
The way IBM's solution works, in essence, is to create new test platforms not on the System z itself, but on everyday x86 boxes. These serve as isolated, controllable environments in which it's possible to change certain variables, for testing purposes, without the complexity or delay that would've been required to coordinate changes with multiple teams on a System z proper for a development prototype.
"A lot of developers want to upgrade, for instance, at the middleware level to take advantage of new capabilities and simplify development," said Myers. "But typical organizations have long upgrade timelines -- one customer upgrade cycle involves a 14-month process -- which could drag the whole project off the rails or require developers to utilize older frameworks which take more coding time to meet deadlines. So, using our solution, they can create that middleware environment on a separate piece of the infrastructure that's off in a corner early in the cycle, as opposed to the centralized environment, to work on applications concurrently to when the official upgrade occurs. They can be prepared to take advantage of new functionality on day one."
It's all good
The benefits of such an approach are clear; the drawbacks, invisible. Consider:
- Faster build cycles. By increasing the number of test environments, developers can test software more quickly, release it more quickly into production and start receiving value from it more quickly.
- Lower costs. By offloading certain testing functions to x86 hardware, IBM has essentially made it less expensive to develop for the mainframe -- getting more accomplished, yet without spending more money on any new hardware.
- Higher System z business value. Because the System z proper allocates fewer resources to testing, it can dedicate more resources to higher-priority tasks -- like revenue-generating production services.
- Smarter utilization of commodity platforms. The average hardware utilization of x86 boxes is not so great -- averaging well under 15 percent in most cases. Instead of wasting processing cycles and power on nothing useful a full 85 percent of the time, these boxes can help with something very useful indeed: testing new applications prior to rollout, to ensure they're as complete and bug-free as they can be.
It seems to me that these last two benefits define really neatly what IBM often has in mind when it talks about "smarter computing," too -- the idea that instead of buying more of something, you get a smarter use of what you already have.
Myers agrees: "Using x86 hardware when possible for testing basically creates more value at low to no cost. And limited availability on the System z can also be dealt with really effectively that way; if the z is constrained for resources at peak times, you can now move testing jobs to x86, instead of asking the testing team to take a whole day off because of the limitations of their test environment."
Given these strengths, it seems like IBM clients who develop for the mainframe would be lining up to try the new approach -- and the ones that do would stand to rake in the benefits.
Such, in fact, is the case for ITERGO, a major insurance provider with offices in more than 30 countries. This organization was interested in developing System z applications in COBOL -- among the oldest of all programming languages -- yet doing so in a modern, GUI-based environment suited to the expectations of the new generation of developers.
Toward that end, ITERGO turned to Rational Developer for System z. And today more than 200 developers there use the IBM solution, leading to a substantial increase in developer productivity and more business value from System z-hosted services.
What's your strategy for enterprise modernization?
Additional Information
- Look into IBM Software capabilities for Enterprise Modernization
- Learn how multiplatform development enables improved business agility
- Read the Invisible Thread blog about IBM Rational software
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
IT professionals -- and I say this with compassion, having been one myself -- tend to think way too much about the T, and not nearly enough about the I.
What do I mean by that? I mean that while technology certainly drives business services, it is not, ultimately, the most valuable player on the IT team. Information -- data -- is.
Data suggests new strategies, quantifies their success or failure, and informs virtually every operational decision (whether it's made by a person or a processor). It's probably not going too far to say that, in a large sense, the fundamental mission of IT is to get the best possible use from data throughout its lifecycle.
And while structured data, like core databases, usually gets most of the time, energy and money, it's unstructured data that comprises some 80 percent of the total in a typical enterprise. This is not the tip of the iceberg, but the hidden bulk of it.
Think of all those Word files, presentation decks, spreadsheets, and PDFs. Think about case notes written up hastily during a phone call; they may never make their way into a database, yet can contain incredibly powerful information. Think of the sum total of data created daily in internal communities, forums, wikis and other collaborative social platforms -- an area that's certainly hot and getting hotter by the day.
Is the enterprise really getting, as I put it earlier, the best possible use from that data?
The answer is almost certainly no, and the consequence is almost certainly diminished agility, creativity, innovation and responsiveness -- all key for the enterprise to succeed.
This is the heart of the argument for Enterprise Content Management (ECM) solutions. By acknowledging the crucial importance of unstructured data, and leveraging it for as much value as possible, organizations can put themselves in a much stronger, more informed, more competitive position going forward.
ECM solutions must evolve with the changing times
Not all ECM solutions are created equal, though. And not all ECM solution providers have the depth of insight, or provide the mature capabilities, that the enterprise will need for best results.
I recently had a chat with Craig Rhinehart, Director of ECM Strategy and Market Development for IBM (check out Craig's ECM blog), and he agreed on that point, calling out that IBM has been developing leading ECM solutions for nearly 30 years and first published research on the topic in 1957, over 50 years ago. That's longer than most IT professionals have even been alive.
And as enterprise infrastructures, content types, strategies and goals continue to evolve, he told me, IBM Software is continuing to evolve its ECM capability and portfolio in parallel, keeping close pace with the changing times.
"Actually, ECM has never been more relevant than it is today," said Rhinehart. "These solutions can drive value in an organization's most valuable processes. Think of insurance claims, for instance; they're really the make-or-break center of everything an insurance organization does. And claims processing typically revolves around many forms of unstructured data in the context of case management, all driven by the need to deliver better service to customers in a highly competitive market. So our ECM solutions are a perfect match."
That's a value proposition that's becoming more and more applicable over time, too. As unstructured content continues to expand in volume, and diversify in nature, major challenges for enterprises emerge in managing it all -- challenges that will often demand a new approach to ECM.
Five great ways to squeeze more value out of your unstructured data
"These challenges really come down to five different areas where we're seeing customers have problems," explained Rhinehart. "It's within them that content management gets applied and customers are seeing value."
One such challenge is document imaging and capture -- basically, grabbing data from non-digital sources, like faxes or snail mail, then sharing it and managing it in all the ways that digital solutions do best.
This is the sort of thing that can really generate tremendous value if it's done right. I once worked at a state government office where a team of more than 50 lawyers was chartered with responding to all snail-mail questions in two days or less -- no matter how complicated those inquiries might be. Given a turnaround time like that, efficient imaging and capture tools were critical to getting the job done, both right and on time.
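A turnaround requirement like that is, at its core, an SLA-tracking problem over captured correspondence. The sketch below is purely illustrative of that anecdote -- `CapturedDocument`, `overdue_queue` and the two-day window are hypothetical names I'm introducing, not part of any IBM capture product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

SLA_WINDOW = timedelta(days=2)  # the two-day turnaround from the anecdote

@dataclass
class CapturedDocument:
    doc_id: str
    source: str                            # e.g. "fax" or "mail"
    captured_at: datetime
    answered_at: Optional[datetime] = None

    def is_overdue(self, now: datetime) -> bool:
        """True if the item is still unanswered past the SLA window."""
        return self.answered_at is None and (now - self.captured_at) > SLA_WINDOW

def overdue_queue(docs: List[CapturedDocument], now: datetime) -> List[CapturedDocument]:
    """Captured items that still need a response and have breached the SLA."""
    return [d for d in docs if d.is_overdue(now)]
```

A periodic job over such a queue is enough to flag at-risk correspondence before the deadline passes -- which is exactly why fast, reliable capture at the front of the pipeline mattered so much.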
And that's just scratching the surface, according to Rhinehart. "There's a global logistics company using IBM ECM production imaging technology to process 600,000 pages per day," he said. "They expect to process 4 million per day when the rollout is completed. And already, they move shipments across borders with 30 percent fewer resources than before. Really, any company has too much paper -- it's a great opportunity for enterprises to reduce cost and risk."
Social content management is another area where ECM capabilities can pay off in a major way -- partly because most of this content is extremely unstructured by nature. Collaborative platforms have typically been developed with a focus on empowering user communication, and rightly so, but it's important that all their content still be connected effectively to the organization's repository of record.
"It's the Wild West right now," said Rhinehart. "If customers don't have a social content strategy today, they need to get one pretty soon. And we at IBM are certainly investing in that area. We think of it as a sea change in business and we plan to continue to lead the way."
Information lifecycle governance is a third area where ECM solutions can play a hand. Here, the focus falls on how information is managed throughout its lifecycle, in accordance with its business needs and other variables such as regulatory and legal obligations.
For instance, by identifying information of lower priority, then moving it to storage infrastructure of similarly lower cost -- migrating it from, say, disk arrays to tape or optical media -- organizations can preserve what they need, yet drive down the associated operational overhead. It also becomes possible to identify what isn't needed at all, eliminating it from the information infrastructure and freeing up much-needed storage resources in the process. Rhinehart adds that "our solutions help our customers dispose of information in a defensible manner. You can't just hit the delete key."
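The tiering idea just described reduces to a simple age-based classification. The sketch below is an assumption-laden illustration -- the tier names, thresholds and `assign_tier` helper are hypothetical, not any product's actual policy engine:

```python
from datetime import datetime, timedelta

# Illustrative tiers: hot content stays on disk, warm content migrates to
# cheaper media, cold content is flagged for defensible-disposal review.
TIERS = [
    (timedelta(days=30), "disk"),     # accessed recently: keep on fast disk
    (timedelta(days=365), "tape"),    # stale: migrate to tape/optical
]
DISPOSAL_REVIEW = "disposal-review"   # very cold: candidate for deletion

def assign_tier(last_accessed: datetime, now: datetime) -> str:
    """Map a document's last-access age to a storage tier."""
    age = now - last_accessed
    for threshold, tier in TIERS:
        if age <= threshold:
            return tier
    return DISPOSAL_REVIEW
```

The "defensible manner" part is the key design point: nothing is deleted directly -- the coldest content only enters a review state where retention and legal obligations can be checked first.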
A fourth area: ECM solutions can add value by automating and optimizing content-centric processes. This is Advanced Case Management (ACM). According to Rhinehart, "ACM helps by addressing the ad-hoc, exception-oriented business processes where collaboration is key and where getting the right decision made is the desired outcome. Traditional BPM solutions aren't the right approach for these processes. You wouldn't want to use a shovel to drive in a nail. ACM enables a more dynamic solution development process, avoiding many of the issues that make rolling out new applications a lot slower, harder and costlier than it should be."
Some organizations may describe ACM solutions as dispute management, customer service resolution, care coordination, interventions or even claims processing. These cases are not a typical straight-through process; they involve invoices, contracts and other forms of enterprise content, and tend to be customer centric. One major retailing chain doing this is now saving US$2.1 million a year in its call center on labor alone.
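The contrast with a fixed BPM flow is easy to see in miniature: a case accumulates ad-hoc tasks and attached content as it unfolds, and closes on a decision rather than at the end of a predefined path. The `Case` class below is a hypothetical sketch of that pattern, not an IBM Case Manager API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Case:
    case_id: str
    subject: str
    tasks: List[dict] = field(default_factory=list)    # ad-hoc, added any time
    content: List[str] = field(default_factory=list)   # invoices, contracts, notes
    resolution: Optional[str] = None

    def add_task(self, description: str) -> None:
        """Tasks are created as the case unfolds, not drawn from a fixed flow."""
        self.tasks.append({"description": description, "done": False})

    def complete_task(self, index: int) -> None:
        self.tasks[index]["done"] = True

    def resolve(self, outcome: str) -> None:
        """A case closes on a decision; open tasks block resolution."""
        if any(not t["done"] for t in self.tasks):
            raise ValueError("open tasks remain")
        self.resolution = outcome
```

A dispute case might pick up a "review invoice" task and an attached contract mid-flight -- exactly the exception-oriented shape Rhinehart describes, which a rigid process diagram handles poorly.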
Finally, content analytics can provide some of the most interesting, and potentially explosive, possibilities for unstructured data in the enterprise today. Just as traditional analytics tools focus on database-driven content, ECM analytics capabilities focus on unstructured content -- surfing through it for patterns or trends that (once implemented as strategies) can create new business value.
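At its very simplest, this kind of pattern-surfacing is term counting over a pile of documents. The stdlib sketch below only hints at what full NLP-based content analytics does; the stopword list, sample notes and `top_terms` helper are illustrative assumptions:

```python
import re
from collections import Counter

# Toy stopword list; real content analytics uses full linguistic processing.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "in", "on", "for", "due"}

def top_terms(documents, n=3):
    """Return the n most common non-stopword terms across the documents."""
    counts = Counter()
    for doc in documents:
        words = re.findall(r"[a-z']+", doc.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(n)

notes = [
    "Customer reported a billing error on the claim",
    "Claim delayed due to billing system outage",
    "Billing team escalated the claim backlog",
]
print(top_terms(notes, 2))  # [('billing', 3), ('claim', 3)]
```

Even this crude version surfaces a trend -- billing problems clustering around claims -- which is the "Business Intelligence for content" idea in its smallest possible form.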
Rhinehart seems particularly impressed with the strides IBM has taken in this area in recent years, as exemplified by the success of the Watson project -- best known for having defeated Jeopardy champions in head-to-head, real-time competition.
"Watson uses IBM Content Analytics technology that is commercially available today for natural language processing. It's being used to leverage and exploit enterprise content by understanding business insights currently trapped in content. Content Analytics is being used to detect fraud, solve crimes, improve healthcare research, find new business opportunities, understand the voice of the customer and more. Think Business Intelligence for content."
I share his appreciation for both Content Analytics and Watson. Watson not only comprehends natural language queries, but also leverages many different analytics algorithms, running in parallel, to arrive at answers deemed likely to be accurate. This is well beyond the scope of ECM, or even enterprise IT as a whole, as it exists today.
"When you can pose questions to a computer in natural language, that's just a whole new ballgame -- that's something IT has never even tried to do before," said Rhinehart. "I've heard it said that every computer before Watson is nothing but a big calculator. And I think there's a lot of truth in that."
Additional Information
Learn more about Enterprise Content Management
Check out Craig Rhinehart's blog
Check out the Enterprise Content Management blog
Gain insight into the ECM Forum at Information On Demand 2011
About the author
Guest blogger Wes Simonds worked in IT for seven years before becoming a technology writer on topics including virtualization, cloud computing and service management. He lives in sunny Austin, Texas and believes Mexican food should always be served with queso.
Yesterday (October 12) we published the first-ever Power Software Edition of the IBM Software Newsletter -- a special issue "front-loaded" with Power Systems-related software news and content. You can read it on the web here.
We plan to publish two of these Power Software Editions every year -- and starting next month (November), we'll be adding Power Systems-related content to our regular monthly issues -- and offering additional Power content to subscribers who want it. Here's all you have to do:
- If you're already an IBM Software Newsletter subscriber (thank you!), update your subscription to include the new "Software for Power Systems" interest category.
- If you're not a subscriber yet, subscribe today -- and make sure you check the "Software for Power Systems" interest category when filling out your subscriber profile.
Thanks -- and watch for news about other special editions of the IBM Software Newsletter on the horizon!